Test Report: KVM_Linux_crio 21132

                    
58bc2bd16d03f6a9f0bea0abc55166132e65bd2e:2025-09-07:41313

Failed tests (3/324)

Order  Failed test                                    Duration (s)
37     TestAddons/parallel/Ingress                    158.1
244    TestPreload                                    175.01
291    TestPause/serial/SecondStartNoReconfiguration  66.12
TestAddons/parallel/Ingress (158.1s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-331285 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-331285 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-331285 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [6a8d4be2-7a52-401c-a82d-77a73e46f2f9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [6a8d4be2-7a52-401c-a82d-77a73e46f2f9] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.005345205s
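[Editor's note: the wait above is the test helper polling pods by label; a roughly equivalent manual check, as a sketch (context, selector, and namespace taken from the log; the timeout value is illustrative):

    kubectl --context addons-331285 wait --for=condition=ready \
      pod --selector=run=nginx --namespace=default --timeout=8m0s
]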
I0906 23:50:25.767588  133025 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-331285 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-331285 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m16.388341699s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
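[Editor's note: ssh reports the remote command's exit status, and 28 is curl's CURLE_OPERATION_TIMEDOUT, so the ingress never answered within curl's limit. A minimal sketch for reproducing the probe by hand (profile, URL, and Host header taken from the log; the --max-time bound is an assumption to keep the retry short):

    # re-run the same in-VM probe with an explicit timeout and a visible HTTP status code
    out/minikube-linux-amd64 -p addons-331285 ssh \
      "curl -s --max-time 30 -o /dev/null -w '%{http_code}\n' http://127.0.0.1/ -H 'Host: nginx.example.com'"
    echo "exit=$?"
]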
addons_test.go:288: (dbg) Run:  kubectl --context addons-331285 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-331285 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.179
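[Editor's note: this fallback exercises the ingress-dns addon, whose in-cluster DNS server is expected to resolve the example ingress host to the cluster IP. A hedged check of that mapping (the IP is obtained the same way the test does; the expected answer is an assumption based on the addon's documented behavior):

    # the test hostname should resolve to the minikube VM's IP when queried against it
    ip=$(out/minikube-linux-amd64 -p addons-331285 ip)
    nslookup hello-john.test "$ip"
]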
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-331285 -n addons-331285
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-331285 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-331285 logs -n 25: (1.639819015s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-045568                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-045568 │ jenkins │ v1.36.0 │ 06 Sep 25 23:46 UTC │ 06 Sep 25 23:46 UTC │
	│ start   │ --download-only -p binary-mirror-826525 --alsologtostderr --binary-mirror http://127.0.0.1:35655 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-826525 │ jenkins │ v1.36.0 │ 06 Sep 25 23:46 UTC │                     │
	│ delete  │ -p binary-mirror-826525                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-826525 │ jenkins │ v1.36.0 │ 06 Sep 25 23:46 UTC │ 06 Sep 25 23:46 UTC │
	│ addons  │ enable dashboard -p addons-331285                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-331285        │ jenkins │ v1.36.0 │ 06 Sep 25 23:46 UTC │                     │
	│ addons  │ disable dashboard -p addons-331285                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-331285        │ jenkins │ v1.36.0 │ 06 Sep 25 23:46 UTC │                     │
	│ start   │ -p addons-331285 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-331285        │ jenkins │ v1.36.0 │ 06 Sep 25 23:46 UTC │ 06 Sep 25 23:49 UTC │
	│ addons  │ addons-331285 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-331285        │ jenkins │ v1.36.0 │ 06 Sep 25 23:49 UTC │ 06 Sep 25 23:49 UTC │
	│ addons  │ addons-331285 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-331285        │ jenkins │ v1.36.0 │ 06 Sep 25 23:49 UTC │ 06 Sep 25 23:49 UTC │
	│ addons  │ addons-331285 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-331285        │ jenkins │ v1.36.0 │ 06 Sep 25 23:49 UTC │ 06 Sep 25 23:49 UTC │
	│ addons  │ addons-331285 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-331285        │ jenkins │ v1.36.0 │ 06 Sep 25 23:49 UTC │ 06 Sep 25 23:49 UTC │
	│ ssh     │ addons-331285 ssh cat /opt/local-path-provisioner/pvc-1da79120-da54-4adf-b24c-2d5a1d1dd2da_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-331285        │ jenkins │ v1.36.0 │ 06 Sep 25 23:49 UTC │ 06 Sep 25 23:49 UTC │
	│ addons  │ addons-331285 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-331285        │ jenkins │ v1.36.0 │ 06 Sep 25 23:49 UTC │ 06 Sep 25 23:50 UTC │
	│ addons  │ enable headlamp -p addons-331285 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-331285        │ jenkins │ v1.36.0 │ 06 Sep 25 23:49 UTC │ 06 Sep 25 23:49 UTC │
	│ addons  │ addons-331285 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-331285        │ jenkins │ v1.36.0 │ 06 Sep 25 23:49 UTC │ 06 Sep 25 23:49 UTC │
	│ ip      │ addons-331285 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-331285        │ jenkins │ v1.36.0 │ 06 Sep 25 23:50 UTC │ 06 Sep 25 23:50 UTC │
	│ addons  │ addons-331285 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-331285        │ jenkins │ v1.36.0 │ 06 Sep 25 23:50 UTC │ 06 Sep 25 23:50 UTC │
	│ addons  │ addons-331285 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-331285        │ jenkins │ v1.36.0 │ 06 Sep 25 23:50 UTC │ 06 Sep 25 23:50 UTC │
	│ addons  │ addons-331285 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-331285        │ jenkins │ v1.36.0 │ 06 Sep 25 23:50 UTC │ 06 Sep 25 23:50 UTC │
	│ addons  │ addons-331285 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-331285        │ jenkins │ v1.36.0 │ 06 Sep 25 23:50 UTC │ 06 Sep 25 23:50 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-331285                                                                                                                                                                                                                                                                                                                                                                                         │ addons-331285        │ jenkins │ v1.36.0 │ 06 Sep 25 23:50 UTC │ 06 Sep 25 23:50 UTC │
	│ addons  │ addons-331285 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-331285        │ jenkins │ v1.36.0 │ 06 Sep 25 23:50 UTC │ 06 Sep 25 23:50 UTC │
	│ ssh     │ addons-331285 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-331285        │ jenkins │ v1.36.0 │ 06 Sep 25 23:50 UTC │                     │
	│ addons  │ addons-331285 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-331285        │ jenkins │ v1.36.0 │ 06 Sep 25 23:50 UTC │ 06 Sep 25 23:50 UTC │
	│ addons  │ addons-331285 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-331285        │ jenkins │ v1.36.0 │ 06 Sep 25 23:50 UTC │ 06 Sep 25 23:50 UTC │
	│ ip      │ addons-331285 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-331285        │ jenkins │ v1.36.0 │ 06 Sep 25 23:52 UTC │ 06 Sep 25 23:52 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/06 23:46:44
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 23:46:44.715501  133633 out.go:360] Setting OutFile to fd 1 ...
	I0906 23:46:44.715635  133633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0906 23:46:44.715644  133633 out.go:374] Setting ErrFile to fd 2...
	I0906 23:46:44.715648  133633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0906 23:46:44.715965  133633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-128697/.minikube/bin
	I0906 23:46:44.716767  133633 out.go:368] Setting JSON to false
	I0906 23:46:44.717632  133633 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1748,"bootTime":1757200657,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 23:46:44.717733  133633 start.go:140] virtualization: kvm guest
	I0906 23:46:44.758715  133633 out.go:179] * [addons-331285] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 23:46:44.841690  133633 notify.go:220] Checking for updates...
	I0906 23:46:44.872108  133633 out.go:179]   - MINIKUBE_LOCATION=21132
	I0906 23:46:44.873539  133633 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 23:46:44.874983  133633 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21132-128697/kubeconfig
	I0906 23:46:44.876259  133633 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-128697/.minikube
	I0906 23:46:44.877488  133633 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 23:46:44.878782  133633 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 23:46:44.880132  133633 driver.go:421] Setting default libvirt URI to qemu:///system
	I0906 23:46:44.913315  133633 out.go:179] * Using the kvm2 driver based on user configuration
	I0906 23:46:44.914542  133633 start.go:304] selected driver: kvm2
	I0906 23:46:44.914562  133633 start.go:918] validating driver "kvm2" against <nil>
	I0906 23:46:44.914575  133633 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 23:46:44.915310  133633 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 23:46:44.915383  133633 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21132-128697/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 23:46:44.931187  133633 install.go:137] /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0906 23:46:44.931243  133633 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0906 23:46:44.931533  133633 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 23:46:44.931565  133633 cni.go:84] Creating CNI manager for ""
	I0906 23:46:44.931609  133633 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 23:46:44.931618  133633 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 23:46:44.931673  133633 start.go:348] cluster config:
	{Name:addons-331285 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-331285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 23:46:44.931768  133633 iso.go:125] acquiring lock: {Name:mk3bd5f7fbe7836651644a94b41f2b6111c9b69d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 23:46:44.934426  133633 out.go:179] * Starting "addons-331285" primary control-plane node in "addons-331285" cluster
	I0906 23:46:44.935842  133633 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0906 23:46:44.935884  133633 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21132-128697/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0906 23:46:44.935894  133633 cache.go:58] Caching tarball of preloaded images
	I0906 23:46:44.935983  133633 preload.go:172] Found /home/jenkins/minikube-integration/21132-128697/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 23:46:44.935994  133633 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0906 23:46:44.936315  133633 profile.go:143] Saving config to /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/config.json ...
	I0906 23:46:44.936339  133633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/config.json: {Name:mk89ebacd4caa847e579cd863e1a04550ff09aac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:46:44.936482  133633 start.go:360] acquireMachinesLock for addons-331285: {Name:mk3b58ef42f26d446b63d531f457f6ac8953e3f5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 23:46:44.937086  133633 start.go:364] duration metric: took 587.521µs to acquireMachinesLock for "addons-331285"
	I0906 23:46:44.937112  133633 start.go:93] Provisioning new machine with config: &{Name:addons-331285 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-331285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 23:46:44.937190  133633 start.go:125] createHost starting for "" (driver="kvm2")
	I0906 23:46:44.938850  133633 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0906 23:46:44.939024  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:46:44.939085  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:46:44.954842  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42013
	I0906 23:46:44.955498  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:46:44.956195  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:46:44.956221  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:46:44.956673  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:46:44.956899  133633 main.go:141] libmachine: (addons-331285) Calling .GetMachineName
	I0906 23:46:44.957066  133633 main.go:141] libmachine: (addons-331285) Calling .DriverName
	I0906 23:46:44.957232  133633 start.go:159] libmachine.API.Create for "addons-331285" (driver="kvm2")
	I0906 23:46:44.957264  133633 client.go:168] LocalClient.Create starting
	I0906 23:46:44.957333  133633 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca.pem
	I0906 23:46:45.233483  133633 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/cert.pem
	I0906 23:46:45.252515  133633 main.go:141] libmachine: Running pre-create checks...
	I0906 23:46:45.252543  133633 main.go:141] libmachine: (addons-331285) Calling .PreCreateCheck
	I0906 23:46:45.253144  133633 main.go:141] libmachine: (addons-331285) Calling .GetConfigRaw
	I0906 23:46:45.253618  133633 main.go:141] libmachine: Creating machine...
	I0906 23:46:45.253633  133633 main.go:141] libmachine: (addons-331285) Calling .Create
	I0906 23:46:45.253806  133633 main.go:141] libmachine: (addons-331285) creating KVM machine...
	I0906 23:46:45.253821  133633 main.go:141] libmachine: (addons-331285) creating network...
	I0906 23:46:45.255244  133633 main.go:141] libmachine: (addons-331285) DBG | found existing default KVM network
	I0906 23:46:45.256719  133633 main.go:141] libmachine: (addons-331285) DBG | I0906 23:46:45.256529  133655 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000112dd0}
	I0906 23:46:45.256780  133633 main.go:141] libmachine: (addons-331285) DBG | created network xml: 
	I0906 23:46:45.256798  133633 main.go:141] libmachine: (addons-331285) DBG | <network>
	I0906 23:46:45.256806  133633 main.go:141] libmachine: (addons-331285) DBG |   <name>mk-addons-331285</name>
	I0906 23:46:45.256813  133633 main.go:141] libmachine: (addons-331285) DBG |   <dns enable='no'/>
	I0906 23:46:45.256839  133633 main.go:141] libmachine: (addons-331285) DBG |   
	I0906 23:46:45.256863  133633 main.go:141] libmachine: (addons-331285) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0906 23:46:45.256873  133633 main.go:141] libmachine: (addons-331285) DBG |     <dhcp>
	I0906 23:46:45.256885  133633 main.go:141] libmachine: (addons-331285) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0906 23:46:45.256894  133633 main.go:141] libmachine: (addons-331285) DBG |     </dhcp>
	I0906 23:46:45.256901  133633 main.go:141] libmachine: (addons-331285) DBG |   </ip>
	I0906 23:46:45.256907  133633 main.go:141] libmachine: (addons-331285) DBG |   
	I0906 23:46:45.256914  133633 main.go:141] libmachine: (addons-331285) DBG | </network>
	I0906 23:46:45.256925  133633 main.go:141] libmachine: (addons-331285) DBG | 
	I0906 23:46:45.262385  133633 main.go:141] libmachine: (addons-331285) DBG | trying to create private KVM network mk-addons-331285 192.168.39.0/24...
	I0906 23:46:45.341770  133633 main.go:141] libmachine: (addons-331285) DBG | private KVM network mk-addons-331285 192.168.39.0/24 created
	I0906 23:46:45.341812  133633 main.go:141] libmachine: (addons-331285) setting up store path in /home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285 ...
	I0906 23:46:45.341823  133633 main.go:141] libmachine: (addons-331285) DBG | I0906 23:46:45.341745  133655 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21132-128697/.minikube
	I0906 23:46:45.341831  133633 main.go:141] libmachine: (addons-331285) building disk image from file:///home/jenkins/minikube-integration/21132-128697/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso
	I0906 23:46:45.342084  133633 main.go:141] libmachine: (addons-331285) Downloading /home/jenkins/minikube-integration/21132-128697/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21132-128697/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso...
	I0906 23:46:45.691328  133633 main.go:141] libmachine: (addons-331285) DBG | I0906 23:46:45.691146  133655 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/id_rsa...
	I0906 23:46:45.749433  133633 main.go:141] libmachine: (addons-331285) DBG | I0906 23:46:45.749223  133655 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/addons-331285.rawdisk...
	I0906 23:46:45.749463  133633 main.go:141] libmachine: (addons-331285) DBG | Writing magic tar header
	I0906 23:46:45.749474  133633 main.go:141] libmachine: (addons-331285) DBG | Writing SSH key tar header
	I0906 23:46:45.749481  133633 main.go:141] libmachine: (addons-331285) DBG | I0906 23:46:45.749381  133655 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285 ...
	I0906 23:46:45.749491  133633 main.go:141] libmachine: (addons-331285) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285
	I0906 23:46:45.749569  133633 main.go:141] libmachine: (addons-331285) setting executable bit set on /home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285 (perms=drwx------)
	I0906 23:46:45.749592  133633 main.go:141] libmachine: (addons-331285) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21132-128697/.minikube/machines
	I0906 23:46:45.749600  133633 main.go:141] libmachine: (addons-331285) setting executable bit set on /home/jenkins/minikube-integration/21132-128697/.minikube/machines (perms=drwxr-xr-x)
	I0906 23:46:45.749609  133633 main.go:141] libmachine: (addons-331285) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21132-128697/.minikube
	I0906 23:46:45.749634  133633 main.go:141] libmachine: (addons-331285) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21132-128697
	I0906 23:46:45.749649  133633 main.go:141] libmachine: (addons-331285) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0906 23:46:45.749660  133633 main.go:141] libmachine: (addons-331285) setting executable bit set on /home/jenkins/minikube-integration/21132-128697/.minikube (perms=drwxr-xr-x)
	I0906 23:46:45.749672  133633 main.go:141] libmachine: (addons-331285) setting executable bit set on /home/jenkins/minikube-integration/21132-128697 (perms=drwxrwxr-x)
	I0906 23:46:45.749684  133633 main.go:141] libmachine: (addons-331285) DBG | checking permissions on dir: /home/jenkins
	I0906 23:46:45.749695  133633 main.go:141] libmachine: (addons-331285) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0906 23:46:45.749706  133633 main.go:141] libmachine: (addons-331285) DBG | checking permissions on dir: /home
	I0906 23:46:45.749721  133633 main.go:141] libmachine: (addons-331285) DBG | skipping /home - not owner
	I0906 23:46:45.749738  133633 main.go:141] libmachine: (addons-331285) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0906 23:46:45.749744  133633 main.go:141] libmachine: (addons-331285) creating domain...
	I0906 23:46:45.750820  133633 main.go:141] libmachine: (addons-331285) define libvirt domain using xml: 
	I0906 23:46:45.750834  133633 main.go:141] libmachine: (addons-331285) <domain type='kvm'>
	I0906 23:46:45.750841  133633 main.go:141] libmachine: (addons-331285)   <name>addons-331285</name>
	I0906 23:46:45.750851  133633 main.go:141] libmachine: (addons-331285)   <memory unit='MiB'>4096</memory>
	I0906 23:46:45.750860  133633 main.go:141] libmachine: (addons-331285)   <vcpu>2</vcpu>
	I0906 23:46:45.750865  133633 main.go:141] libmachine: (addons-331285)   <features>
	I0906 23:46:45.750873  133633 main.go:141] libmachine: (addons-331285)     <acpi/>
	I0906 23:46:45.750883  133633 main.go:141] libmachine: (addons-331285)     <apic/>
	I0906 23:46:45.750890  133633 main.go:141] libmachine: (addons-331285)     <pae/>
	I0906 23:46:45.750894  133633 main.go:141] libmachine: (addons-331285)     
	I0906 23:46:45.750911  133633 main.go:141] libmachine: (addons-331285)   </features>
	I0906 23:46:45.750920  133633 main.go:141] libmachine: (addons-331285)   <cpu mode='host-passthrough'>
	I0906 23:46:45.750934  133633 main.go:141] libmachine: (addons-331285)   
	I0906 23:46:45.750944  133633 main.go:141] libmachine: (addons-331285)   </cpu>
	I0906 23:46:45.750952  133633 main.go:141] libmachine: (addons-331285)   <os>
	I0906 23:46:45.750958  133633 main.go:141] libmachine: (addons-331285)     <type>hvm</type>
	I0906 23:46:45.750998  133633 main.go:141] libmachine: (addons-331285)     <boot dev='cdrom'/>
	I0906 23:46:45.751028  133633 main.go:141] libmachine: (addons-331285)     <boot dev='hd'/>
	I0906 23:46:45.751039  133633 main.go:141] libmachine: (addons-331285)     <bootmenu enable='no'/>
	I0906 23:46:45.751054  133633 main.go:141] libmachine: (addons-331285)   </os>
	I0906 23:46:45.751132  133633 main.go:141] libmachine: (addons-331285)   <devices>
	I0906 23:46:45.751158  133633 main.go:141] libmachine: (addons-331285)     <disk type='file' device='cdrom'>
	I0906 23:46:45.751171  133633 main.go:141] libmachine: (addons-331285)       <source file='/home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/boot2docker.iso'/>
	I0906 23:46:45.751180  133633 main.go:141] libmachine: (addons-331285)       <target dev='hdc' bus='scsi'/>
	I0906 23:46:45.751188  133633 main.go:141] libmachine: (addons-331285)       <readonly/>
	I0906 23:46:45.751210  133633 main.go:141] libmachine: (addons-331285)     </disk>
	I0906 23:46:45.751219  133633 main.go:141] libmachine: (addons-331285)     <disk type='file' device='disk'>
	I0906 23:46:45.751230  133633 main.go:141] libmachine: (addons-331285)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0906 23:46:45.751240  133633 main.go:141] libmachine: (addons-331285)       <source file='/home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/addons-331285.rawdisk'/>
	I0906 23:46:45.751245  133633 main.go:141] libmachine: (addons-331285)       <target dev='hda' bus='virtio'/>
	I0906 23:46:45.751253  133633 main.go:141] libmachine: (addons-331285)     </disk>
	I0906 23:46:45.751257  133633 main.go:141] libmachine: (addons-331285)     <interface type='network'>
	I0906 23:46:45.751272  133633 main.go:141] libmachine: (addons-331285)       <source network='mk-addons-331285'/>
	I0906 23:46:45.751287  133633 main.go:141] libmachine: (addons-331285)       <model type='virtio'/>
	I0906 23:46:45.751328  133633 main.go:141] libmachine: (addons-331285)     </interface>
	I0906 23:46:45.751354  133633 main.go:141] libmachine: (addons-331285)     <interface type='network'>
	I0906 23:46:45.751369  133633 main.go:141] libmachine: (addons-331285)       <source network='default'/>
	I0906 23:46:45.751380  133633 main.go:141] libmachine: (addons-331285)       <model type='virtio'/>
	I0906 23:46:45.751393  133633 main.go:141] libmachine: (addons-331285)     </interface>
	I0906 23:46:45.751404  133633 main.go:141] libmachine: (addons-331285)     <serial type='pty'>
	I0906 23:46:45.751416  133633 main.go:141] libmachine: (addons-331285)       <target port='0'/>
	I0906 23:46:45.751434  133633 main.go:141] libmachine: (addons-331285)     </serial>
	I0906 23:46:45.751444  133633 main.go:141] libmachine: (addons-331285)     <console type='pty'>
	I0906 23:46:45.751454  133633 main.go:141] libmachine: (addons-331285)       <target type='serial' port='0'/>
	I0906 23:46:45.751469  133633 main.go:141] libmachine: (addons-331285)     </console>
	I0906 23:46:45.751482  133633 main.go:141] libmachine: (addons-331285)     <rng model='virtio'>
	I0906 23:46:45.751494  133633 main.go:141] libmachine: (addons-331285)       <backend model='random'>/dev/random</backend>
	I0906 23:46:45.751506  133633 main.go:141] libmachine: (addons-331285)     </rng>
	I0906 23:46:45.751515  133633 main.go:141] libmachine: (addons-331285)     
	I0906 23:46:45.751524  133633 main.go:141] libmachine: (addons-331285)     
	I0906 23:46:45.751533  133633 main.go:141] libmachine: (addons-331285)   </devices>
	I0906 23:46:45.751545  133633 main.go:141] libmachine: (addons-331285) </domain>
	I0906 23:46:45.751558  133633 main.go:141] libmachine: (addons-331285) 
	I0906 23:46:45.757609  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:42:4d:0c in network default
	I0906 23:46:45.758123  133633 main.go:141] libmachine: (addons-331285) starting domain...
	I0906 23:46:45.758144  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:46:45.758153  133633 main.go:141] libmachine: (addons-331285) ensuring networks are active...
	I0906 23:46:45.758944  133633 main.go:141] libmachine: (addons-331285) Ensuring network default is active
	I0906 23:46:45.759343  133633 main.go:141] libmachine: (addons-331285) Ensuring network mk-addons-331285 is active
	I0906 23:46:45.759948  133633 main.go:141] libmachine: (addons-331285) getting domain XML...
	I0906 23:46:45.760760  133633 main.go:141] libmachine: (addons-331285) creating domain...
	I0906 23:46:46.282179  133633 main.go:141] libmachine: (addons-331285) waiting for IP...
	I0906 23:46:46.282931  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:46:46.283305  133633 main.go:141] libmachine: (addons-331285) DBG | unable to find current IP address of domain addons-331285 in network mk-addons-331285
	I0906 23:46:46.283390  133633 main.go:141] libmachine: (addons-331285) DBG | I0906 23:46:46.283305  133655 retry.go:31] will retry after 265.930631ms: waiting for domain to come up
	I0906 23:46:46.550727  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:46:46.551200  133633 main.go:141] libmachine: (addons-331285) DBG | unable to find current IP address of domain addons-331285 in network mk-addons-331285
	I0906 23:46:46.551257  133633 main.go:141] libmachine: (addons-331285) DBG | I0906 23:46:46.551175  133655 retry.go:31] will retry after 282.606003ms: waiting for domain to come up
	I0906 23:46:46.835897  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:46:46.836321  133633 main.go:141] libmachine: (addons-331285) DBG | unable to find current IP address of domain addons-331285 in network mk-addons-331285
	I0906 23:46:46.836356  133633 main.go:141] libmachine: (addons-331285) DBG | I0906 23:46:46.836294  133655 retry.go:31] will retry after 399.711471ms: waiting for domain to come up
	I0906 23:46:47.238108  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:46:47.238564  133633 main.go:141] libmachine: (addons-331285) DBG | unable to find current IP address of domain addons-331285 in network mk-addons-331285
	I0906 23:46:47.238608  133633 main.go:141] libmachine: (addons-331285) DBG | I0906 23:46:47.238546  133655 retry.go:31] will retry after 416.170979ms: waiting for domain to come up
	I0906 23:46:47.656163  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:46:47.656564  133633 main.go:141] libmachine: (addons-331285) DBG | unable to find current IP address of domain addons-331285 in network mk-addons-331285
	I0906 23:46:47.656590  133633 main.go:141] libmachine: (addons-331285) DBG | I0906 23:46:47.656520  133655 retry.go:31] will retry after 625.002298ms: waiting for domain to come up
	I0906 23:46:48.283560  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:46:48.283975  133633 main.go:141] libmachine: (addons-331285) DBG | unable to find current IP address of domain addons-331285 in network mk-addons-331285
	I0906 23:46:48.284005  133633 main.go:141] libmachine: (addons-331285) DBG | I0906 23:46:48.283944  133655 retry.go:31] will retry after 589.171587ms: waiting for domain to come up
	I0906 23:46:48.874845  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:46:48.875238  133633 main.go:141] libmachine: (addons-331285) DBG | unable to find current IP address of domain addons-331285 in network mk-addons-331285
	I0906 23:46:48.875261  133633 main.go:141] libmachine: (addons-331285) DBG | I0906 23:46:48.875205  133655 retry.go:31] will retry after 1.188995023s: waiting for domain to come up
	I0906 23:46:50.066210  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:46:50.066679  133633 main.go:141] libmachine: (addons-331285) DBG | unable to find current IP address of domain addons-331285 in network mk-addons-331285
	I0906 23:46:50.066712  133633 main.go:141] libmachine: (addons-331285) DBG | I0906 23:46:50.066641  133655 retry.go:31] will retry after 1.450707575s: waiting for domain to come up
	I0906 23:46:51.518902  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:46:51.519294  133633 main.go:141] libmachine: (addons-331285) DBG | unable to find current IP address of domain addons-331285 in network mk-addons-331285
	I0906 23:46:51.519318  133633 main.go:141] libmachine: (addons-331285) DBG | I0906 23:46:51.519269  133655 retry.go:31] will retry after 1.795155866s: waiting for domain to come up
	I0906 23:46:53.317347  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:46:53.317679  133633 main.go:141] libmachine: (addons-331285) DBG | unable to find current IP address of domain addons-331285 in network mk-addons-331285
	I0906 23:46:53.317711  133633 main.go:141] libmachine: (addons-331285) DBG | I0906 23:46:53.317662  133655 retry.go:31] will retry after 2.028707014s: waiting for domain to come up
	I0906 23:46:55.348217  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:46:55.348695  133633 main.go:141] libmachine: (addons-331285) DBG | unable to find current IP address of domain addons-331285 in network mk-addons-331285
	I0906 23:46:55.348726  133633 main.go:141] libmachine: (addons-331285) DBG | I0906 23:46:55.348647  133655 retry.go:31] will retry after 2.389711266s: waiting for domain to come up
	I0906 23:46:57.740203  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:46:57.740787  133633 main.go:141] libmachine: (addons-331285) DBG | unable to find current IP address of domain addons-331285 in network mk-addons-331285
	I0906 23:46:57.740822  133633 main.go:141] libmachine: (addons-331285) DBG | I0906 23:46:57.740711  133655 retry.go:31] will retry after 2.260015754s: waiting for domain to come up
	I0906 23:47:00.001930  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:00.002333  133633 main.go:141] libmachine: (addons-331285) DBG | unable to find current IP address of domain addons-331285 in network mk-addons-331285
	I0906 23:47:00.002361  133633 main.go:141] libmachine: (addons-331285) DBG | I0906 23:47:00.002308  133655 retry.go:31] will retry after 3.339935776s: waiting for domain to come up
	I0906 23:47:03.345908  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:03.346379  133633 main.go:141] libmachine: (addons-331285) DBG | unable to find current IP address of domain addons-331285 in network mk-addons-331285
	I0906 23:47:03.346424  133633 main.go:141] libmachine: (addons-331285) DBG | I0906 23:47:03.346352  133655 retry.go:31] will retry after 4.165890688s: waiting for domain to come up
	I0906 23:47:07.515451  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:07.515965  133633 main.go:141] libmachine: (addons-331285) found domain IP: 192.168.39.179
	I0906 23:47:07.516004  133633 main.go:141] libmachine: (addons-331285) reserving static IP address...
	I0906 23:47:07.516018  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has current primary IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:07.516409  133633 main.go:141] libmachine: (addons-331285) DBG | unable to find host DHCP lease matching {name: "addons-331285", mac: "52:54:00:75:b8:ba", ip: "192.168.39.179"} in network mk-addons-331285
	I0906 23:47:07.597181  133633 main.go:141] libmachine: (addons-331285) reserved static IP address 192.168.39.179 for domain addons-331285
	I0906 23:47:07.597216  133633 main.go:141] libmachine: (addons-331285) DBG | Getting to WaitForSSH function...
	I0906 23:47:07.597225  133633 main.go:141] libmachine: (addons-331285) waiting for SSH...
	I0906 23:47:07.600016  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:07.600581  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:minikube Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:07.600606  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:07.600819  133633 main.go:141] libmachine: (addons-331285) DBG | Using SSH client type: external
	I0906 23:47:07.600848  133633 main.go:141] libmachine: (addons-331285) DBG | Using SSH private key: /home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/id_rsa (-rw-------)
	I0906 23:47:07.600879  133633 main.go:141] libmachine: (addons-331285) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.179 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 23:47:07.600891  133633 main.go:141] libmachine: (addons-331285) DBG | About to run SSH command:
	I0906 23:47:07.600905  133633 main.go:141] libmachine: (addons-331285) DBG | exit 0
	I0906 23:47:07.733213  133633 main.go:141] libmachine: (addons-331285) DBG | SSH cmd err, output: <nil>: 
	I0906 23:47:07.733498  133633 main.go:141] libmachine: (addons-331285) KVM machine creation complete
	I0906 23:47:07.733860  133633 main.go:141] libmachine: (addons-331285) Calling .GetConfigRaw
	I0906 23:47:07.734393  133633 main.go:141] libmachine: (addons-331285) Calling .DriverName
	I0906 23:47:07.734642  133633 main.go:141] libmachine: (addons-331285) Calling .DriverName
	I0906 23:47:07.734817  133633 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0906 23:47:07.734834  133633 main.go:141] libmachine: (addons-331285) Calling .GetState
	I0906 23:47:07.736019  133633 main.go:141] libmachine: Detecting operating system of created instance...
	I0906 23:47:07.736034  133633 main.go:141] libmachine: Waiting for SSH to be available...
	I0906 23:47:07.736039  133633 main.go:141] libmachine: Getting to WaitForSSH function...
	I0906 23:47:07.736044  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:07.738292  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:07.738666  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:07.738689  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:07.738848  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:07.739026  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:07.739208  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:07.739342  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:07.739499  133633 main.go:141] libmachine: Using SSH client type: native
	I0906 23:47:07.739827  133633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0906 23:47:07.739841  133633 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0906 23:47:07.844411  133633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 23:47:07.844439  133633 main.go:141] libmachine: Detecting the provisioner...
	I0906 23:47:07.844451  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:07.847327  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:07.847637  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:07.847669  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:07.847831  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:07.848051  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:07.848201  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:07.848313  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:07.848421  133633 main.go:141] libmachine: Using SSH client type: native
	I0906 23:47:07.848616  133633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0906 23:47:07.848625  133633 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0906 23:47:07.954285  133633 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0906 23:47:07.954411  133633 main.go:141] libmachine: found compatible host: buildroot
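
Provisioner detection above reduces to running `cat /etc/os-release` and matching the ID field. A stdlib-only sketch of that parse; the helper name and sample input are made up, while the quoting rules follow the os-release format:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseOSRelease extracts KEY=value pairs from os-release content,
    // skipping blanks/comments and stripping optional quotes around values.
    func parseOSRelease(content string) map[string]string {
        out := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(content))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            out[k] = strings.Trim(v, `"`)
        }
        return out
    }

    func main() {
        sample := "NAME=Buildroot\nVERSION=2025.02-dirty\nID=buildroot\n"
        fields := parseOSRelease(sample)
        if fields["ID"] == "buildroot" {
            fmt.Println("found compatible host:", fields["ID"])
        }
    }
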
	I0906 23:47:07.954426  133633 main.go:141] libmachine: Provisioning with buildroot...
	I0906 23:47:07.954440  133633 main.go:141] libmachine: (addons-331285) Calling .GetMachineName
	I0906 23:47:07.954714  133633 buildroot.go:166] provisioning hostname "addons-331285"
	I0906 23:47:07.954746  133633 main.go:141] libmachine: (addons-331285) Calling .GetMachineName
	I0906 23:47:07.955001  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:07.957503  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:07.957819  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:07.957848  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:07.957964  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:07.958142  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:07.958286  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:07.958456  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:07.958618  133633 main.go:141] libmachine: Using SSH client type: native
	I0906 23:47:07.958812  133633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0906 23:47:07.958825  133633 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-331285 && echo "addons-331285" | sudo tee /etc/hostname
	I0906 23:47:08.083464  133633 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-331285
	
	I0906 23:47:08.083503  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:08.086689  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:08.086958  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:08.086987  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:08.087148  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:08.087381  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:08.087530  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:08.087692  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:08.087861  133633 main.go:141] libmachine: Using SSH client type: native
	I0906 23:47:08.088123  133633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0906 23:47:08.088148  133633 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-331285' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-331285/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-331285' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 23:47:08.204812  133633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
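
The shell snippet above adds a 127.0.1.1 entry for the new hostname only when the name is not already mapped, replacing an existing 127.0.1.1 line if one is present. The same idempotent rewrite expressed in Go, operating on file content as a string (in minikube this logic runs on the guest over SSH, not locally):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // ensureHostsEntry returns hosts content guaranteed to map 127.0.1.1
    // to name, replacing an existing 127.0.1.1 line or appending one.
    func ensureHostsEntry(hosts, name string) string {
        if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
            return hosts // hostname already mapped, leave file untouched
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        entry := "127.0.1.1 " + name
        if loopback.MatchString(hosts) {
            return loopback.ReplaceAllString(hosts, entry)
        }
        return strings.TrimRight(hosts, "\n") + "\n" + entry + "\n"
    }

    func main() {
        fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "addons-331285"))
    }
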
	I0906 23:47:08.204857  133633 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21132-128697/.minikube CaCertPath:/home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21132-128697/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21132-128697/.minikube}
	I0906 23:47:08.204895  133633 buildroot.go:174] setting up certificates
	I0906 23:47:08.204915  133633 provision.go:84] configureAuth start
	I0906 23:47:08.204937  133633 main.go:141] libmachine: (addons-331285) Calling .GetMachineName
	I0906 23:47:08.205293  133633 main.go:141] libmachine: (addons-331285) Calling .GetIP
	I0906 23:47:08.208358  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:08.208679  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:08.208703  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:08.208854  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:08.210764  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:08.211068  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:08.211096  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:08.211270  133633 provision.go:143] copyHostCerts
	I0906 23:47:08.211362  133633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21132-128697/.minikube/cert.pem (1123 bytes)
	I0906 23:47:08.211511  133633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21132-128697/.minikube/key.pem (1679 bytes)
	I0906 23:47:08.211583  133633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21132-128697/.minikube/ca.pem (1082 bytes)
	I0906 23:47:08.211647  133633 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21132-128697/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca-key.pem org=jenkins.addons-331285 san=[127.0.0.1 192.168.39.179 addons-331285 localhost minikube]
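
provision.go:117 above signs a server certificate whose SANs cover the node IP, loopback, hostname, localhost, and "minikube". A compact crypto/x509 sketch of the SAN handling; it self-signs to stay short, whereas the real provisioner signs with the CA key pair listed in the log:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-331285"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs matching the san=[...] list logged above.
            DNSNames:    []string{"addons-331285", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.179")},
        }
        // Self-signed: template doubles as parent. A real CA would pass its
        // own certificate and private key here instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
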
	I0906 23:47:08.385507  133633 provision.go:177] copyRemoteCerts
	I0906 23:47:08.385579  133633 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 23:47:08.385607  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:08.388420  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:08.388858  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:08.388889  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:08.389072  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:08.389277  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:08.389456  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:08.389630  133633 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/id_rsa Username:docker}
	I0906 23:47:08.475491  133633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 23:47:08.509180  133633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 23:47:08.543222  133633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 23:47:08.576473  133633 provision.go:87] duration metric: took 371.534866ms to configureAuth
	I0906 23:47:08.576505  133633 buildroot.go:189] setting minikube options for container-runtime
	I0906 23:47:08.576762  133633 config.go:182] Loaded profile config "addons-331285": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0906 23:47:08.576954  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:08.579737  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:08.580051  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:08.580094  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:08.580269  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:08.580499  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:08.580685  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:08.580875  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:08.581055  133633 main.go:141] libmachine: Using SSH client type: native
	I0906 23:47:08.581277  133633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0906 23:47:08.581292  133633 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 23:47:08.849627  133633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 23:47:08.849659  133633 main.go:141] libmachine: Checking connection to Docker...
	I0906 23:47:08.849667  133633 main.go:141] libmachine: (addons-331285) Calling .GetURL
	I0906 23:47:08.851162  133633 main.go:141] libmachine: (addons-331285) DBG | using libvirt version 6000000
	I0906 23:47:08.853522  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:08.853938  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:08.853977  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:08.854176  133633 main.go:141] libmachine: Docker is up and running!
	I0906 23:47:08.854192  133633 main.go:141] libmachine: Reticulating splines...
	I0906 23:47:08.854201  133633 client.go:171] duration metric: took 23.896924722s to LocalClient.Create
	I0906 23:47:08.854229  133633 start.go:167] duration metric: took 23.89699942s to libmachine.API.Create "addons-331285"
	I0906 23:47:08.854244  133633 start.go:293] postStartSetup for "addons-331285" (driver="kvm2")
	I0906 23:47:08.854254  133633 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 23:47:08.854276  133633 main.go:141] libmachine: (addons-331285) Calling .DriverName
	I0906 23:47:08.854540  133633 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 23:47:08.854569  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:08.856987  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:08.857356  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:08.857389  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:08.857542  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:08.857757  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:08.857930  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:08.858100  133633 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/id_rsa Username:docker}
	I0906 23:47:08.946103  133633 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 23:47:08.951280  133633 info.go:137] Remote host: Buildroot 2025.02
	I0906 23:47:08.951313  133633 filesync.go:126] Scanning /home/jenkins/minikube-integration/21132-128697/.minikube/addons for local assets ...
	I0906 23:47:08.951392  133633 filesync.go:126] Scanning /home/jenkins/minikube-integration/21132-128697/.minikube/files for local assets ...
	I0906 23:47:08.951426  133633 start.go:296] duration metric: took 97.176352ms for postStartSetup
	I0906 23:47:08.951467  133633 main.go:141] libmachine: (addons-331285) Calling .GetConfigRaw
	I0906 23:47:08.952153  133633 main.go:141] libmachine: (addons-331285) Calling .GetIP
	I0906 23:47:08.954671  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:08.955026  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:08.955055  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:08.955324  133633 profile.go:143] Saving config to /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/config.json ...
	I0906 23:47:08.955511  133633 start.go:128] duration metric: took 24.018309526s to createHost
	I0906 23:47:08.955537  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:08.958011  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:08.958333  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:08.958374  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:08.958536  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:08.958722  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:08.958892  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:08.959006  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:08.959157  133633 main.go:141] libmachine: Using SSH client type: native
	I0906 23:47:08.959372  133633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0906 23:47:08.959383  133633 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 23:47:09.066545  133633 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757202429.036942934
	
	I0906 23:47:09.066585  133633 fix.go:216] guest clock: 1757202429.036942934
	I0906 23:47:09.066597  133633 fix.go:229] Guest: 2025-09-06 23:47:09.036942934 +0000 UTC Remote: 2025-09-06 23:47:08.955524313 +0000 UTC m=+24.281431545 (delta=81.418621ms)
	I0906 23:47:09.066650  133633 fix.go:200] guest clock delta is within tolerance: 81.418621ms
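
The clock check above runs `date +%s.%N` on the guest and compares the reply against host time. A sketch of that comparison using the sample values from this log; the 2s tolerance constant is an assumption for illustration, not necessarily the value minikube enforces:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    // clockDelta parses the guest's "seconds.nanoseconds" reply and returns
    // how far it drifts from the host clock.
    func clockDelta(guestReply string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestReply, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(host), nil
    }

    func main() {
        // Values taken from the guest/remote timestamps logged above.
        delta, err := clockDelta("1757202429.036942934", time.Unix(1757202428, 955524313))
        if err != nil {
            panic(err)
        }
        const tolerance = 2 * time.Second // assumed threshold
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        }
    }
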
	I0906 23:47:09.066665  133633 start.go:83] releasing machines lock for "addons-331285", held for 24.129562833s
	I0906 23:47:09.066721  133633 main.go:141] libmachine: (addons-331285) Calling .DriverName
	I0906 23:47:09.067066  133633 main.go:141] libmachine: (addons-331285) Calling .GetIP
	I0906 23:47:09.069987  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:09.070348  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:09.070373  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:09.070560  133633 main.go:141] libmachine: (addons-331285) Calling .DriverName
	I0906 23:47:09.071147  133633 main.go:141] libmachine: (addons-331285) Calling .DriverName
	I0906 23:47:09.071323  133633 main.go:141] libmachine: (addons-331285) Calling .DriverName
	I0906 23:47:09.071450  133633 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 23:47:09.071498  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:09.071533  133633 ssh_runner.go:195] Run: cat /version.json
	I0906 23:47:09.071553  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:09.074456  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:09.074510  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:09.074764  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:09.074795  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:09.074826  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:09.074843  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:09.074950  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:09.075098  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:09.075188  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:09.075274  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:09.075325  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:09.075420  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:09.075496  133633 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/id_rsa Username:docker}
	I0906 23:47:09.075585  133633 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/id_rsa Username:docker}
	I0906 23:47:09.154559  133633 ssh_runner.go:195] Run: systemctl --version
	I0906 23:47:09.184612  133633 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 23:47:09.355321  133633 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 23:47:09.363337  133633 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 23:47:09.363425  133633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 23:47:09.386717  133633 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
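
cni.go:262 above disables conflicting bridge/podman CNI configs by renaming them with a .mk_disabled suffix. The same rename-to-disable idea in Go; the glob patterns paraphrase the find expression shown in the log, and the program needs root to actually move files under /etc/cni/net.d:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        disabled := []string{}
        for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, err := filepath.Glob(pat)
            if err != nil {
                panic(err)
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already disabled on a previous run
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    panic(err)
                }
                disabled = append(disabled, m)
            }
        }
        fmt.Println("disabled bridge cni config(s):", disabled)
    }
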
	I0906 23:47:09.386752  133633 start.go:495] detecting cgroup driver to use...
	I0906 23:47:09.386838  133633 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 23:47:09.410704  133633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 23:47:09.430401  133633 docker.go:218] disabling cri-docker service (if available) ...
	I0906 23:47:09.430468  133633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 23:47:09.448802  133633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 23:47:09.467632  133633 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 23:47:09.622921  133633 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 23:47:09.763962  133633 docker.go:234] disabling docker service ...
	I0906 23:47:09.764044  133633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 23:47:09.782487  133633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 23:47:09.798564  133633 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 23:47:10.020073  133633 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 23:47:10.166604  133633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 23:47:10.182477  133633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 23:47:10.206189  133633 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0906 23:47:10.206257  133633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 23:47:10.218966  133633 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 23:47:10.219046  133633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 23:47:10.231487  133633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 23:47:10.243339  133633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 23:47:10.255553  133633 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 23:47:10.268447  133633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 23:47:10.280761  133633 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 23:47:10.301370  133633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 23:47:10.313932  133633 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 23:47:10.324918  133633 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 23:47:10.324995  133633 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 23:47:10.345658  133633 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
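
crio.go:166 above treats a failed bridge-nf sysctl read as recoverable, loads br_netfilter, then enables IPv4 forwarding. A sketch of that probe-and-fallback flow with os/exec; it must run as root, and error handling is reduced to panics for brevity:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Verify netfilter is wired into the bridge; a missing /proc entry
        // means the module is not loaded yet, which is recoverable.
        if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
            if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
                panic(err)
            }
        }
        // Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
            panic(err)
        }
    }
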
	I0906 23:47:10.358579  133633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 23:47:10.496818  133633 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 23:47:10.613802  133633 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 23:47:10.613904  133633 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 23:47:10.620960  133633 start.go:563] Will wait 60s for crictl version
	I0906 23:47:10.621051  133633 ssh_runner.go:195] Run: which crictl
	I0906 23:47:10.625708  133633 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 23:47:10.671724  133633 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
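
start.go:542 above waits up to 60s for the CRI-O socket before querying crictl. A polling sketch of that wait; the 500ms interval is invented, and minikube performs the stat over SSH rather than on the local filesystem:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls path until it exists or timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("socket %s not ready within %s", path, timeout)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            panic(err)
        }
        fmt.Println("crio socket is up")
    }
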
	I0906 23:47:10.671851  133633 ssh_runner.go:195] Run: crio --version
	I0906 23:47:10.705432  133633 ssh_runner.go:195] Run: crio --version
	I0906 23:47:10.781674  133633 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0906 23:47:10.865851  133633 main.go:141] libmachine: (addons-331285) Calling .GetIP
	I0906 23:47:10.868803  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:10.869140  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:10.869169  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:10.869358  133633 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 23:47:10.874738  133633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 23:47:10.891718  133633 kubeadm.go:875] updating cluster {Name:addons-331285 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-331285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.179 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 23:47:10.891839  133633 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0906 23:47:10.891900  133633 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 23:47:10.929549  133633 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0906 23:47:10.929627  133633 ssh_runner.go:195] Run: which lz4
	I0906 23:47:10.934644  133633 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 23:47:10.940877  133633 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 23:47:10.940920  133633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0906 23:47:12.686371  133633 crio.go:462] duration metric: took 1.751765231s to copy over tarball
	I0906 23:47:12.686448  133633 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 23:47:14.520429  133633 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.833945252s)
	I0906 23:47:14.520464  133633 crio.go:469] duration metric: took 1.834059927s to extract the tarball
	I0906 23:47:14.520475  133633 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 23:47:14.562879  133633 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 23:47:14.611610  133633 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 23:47:14.611643  133633 cache_images.go:85] Images are preloaded, skipping loading
	I0906 23:47:14.611652  133633 kubeadm.go:926] updating node { 192.168.39.179 8443 v1.34.0 crio true true} ...
	I0906 23:47:14.611871  133633 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-331285 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.179
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-331285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 23:47:14.611961  133633 ssh_runner.go:195] Run: crio config
	I0906 23:47:14.662052  133633 cni.go:84] Creating CNI manager for ""
	I0906 23:47:14.662090  133633 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 23:47:14.662100  133633 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 23:47:14.662126  133633 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.179 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-331285 NodeName:addons-331285 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.179"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.179 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 23:47:14.662301  133633 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.179
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-331285"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.179"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.179"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 23:47:14.662386  133633 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0906 23:47:14.676239  133633 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 23:47:14.676309  133633 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 23:47:14.689558  133633 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0906 23:47:14.712173  133633 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 23:47:14.734525  133633 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
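
The kubeadm.yaml shown above is produced by substituting per-cluster values into a template before it is copied to /var/tmp/minikube/kubeadm.yaml.new. A trimmed text/template sketch covering just the InitConfiguration stanza; the field names mirror the rendered output, but this is not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix://{{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
        // Values copied from the cluster config logged above.
        err := tmpl.Execute(os.Stdout, struct {
            AdvertiseAddress, CRISocket, NodeName string
            APIServerPort                         int
        }{"192.168.39.179", "/var/run/crio/crio.sock", "addons-331285", 8443})
        if err != nil {
            panic(err)
        }
    }
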
	I0906 23:47:14.756817  133633 ssh_runner.go:195] Run: grep 192.168.39.179	control-plane.minikube.internal$ /etc/hosts
	I0906 23:47:14.761415  133633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.179	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 23:47:14.777972  133633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 23:47:14.921945  133633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 23:47:14.960963  133633 certs.go:68] Setting up /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285 for IP: 192.168.39.179
	I0906 23:47:14.961000  133633 certs.go:194] generating shared ca certs ...
	I0906 23:47:14.961078  133633 certs.go:226] acquiring lock for ca certs: {Name:mk640ab940eb4d822d1f15a5cd2b466b6472cad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:47:14.961270  133633 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21132-128697/.minikube/ca.key
	I0906 23:47:15.665891  133633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21132-128697/.minikube/ca.crt ...
	I0906 23:47:15.665922  133633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-128697/.minikube/ca.crt: {Name:mk93913bb2765d78f12ec724834cfb799400037c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:47:15.666925  133633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21132-128697/.minikube/ca.key ...
	I0906 23:47:15.666949  133633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-128697/.minikube/ca.key: {Name:mk65a66e7a27b2b1cc383e4f5836150e8458e59d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:47:15.667647  133633 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21132-128697/.minikube/proxy-client-ca.key
	I0906 23:47:15.981543  133633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21132-128697/.minikube/proxy-client-ca.crt ...
	I0906 23:47:15.981585  133633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-128697/.minikube/proxy-client-ca.crt: {Name:mk838039112a35be416d0ccdcfe66b6889b06db4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:47:15.981783  133633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21132-128697/.minikube/proxy-client-ca.key ...
	I0906 23:47:15.981803  133633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-128697/.minikube/proxy-client-ca.key: {Name:mk5a6b36318cec06dee11aa51793354c990891a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:47:15.981901  133633 certs.go:256] generating profile certs ...
	I0906 23:47:15.981982  133633 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.key
	I0906 23:47:15.982000  133633 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt with IP's: []
	I0906 23:47:16.285442  133633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt ...
	I0906 23:47:16.285475  133633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: {Name:mkdfc8b741aef20fd39a10648a9a623b8ddb62ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:47:16.285668  133633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.key ...
	I0906 23:47:16.285682  133633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.key: {Name:mkc676a5861c45e912c78d051dbbe2fba164ef91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:47:16.285774  133633 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/apiserver.key.7f80f580
	I0906 23:47:16.285801  133633 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/apiserver.crt.7f80f580 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.179]
	I0906 23:47:16.445898  133633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/apiserver.crt.7f80f580 ...
	I0906 23:47:16.445933  133633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/apiserver.crt.7f80f580: {Name:mkc5786adf82fc46751ec401b894dade36603f9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:47:16.446119  133633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/apiserver.key.7f80f580 ...
	I0906 23:47:16.446140  133633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/apiserver.key.7f80f580: {Name:mk7819da6bec2fb82fb80cad03806313bb8e9db9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:47:16.446242  133633 certs.go:381] copying /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/apiserver.crt.7f80f580 -> /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/apiserver.crt
	I0906 23:47:16.446392  133633 certs.go:385] copying /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/apiserver.key.7f80f580 -> /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/apiserver.key
	I0906 23:47:16.446474  133633 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/proxy-client.key
	I0906 23:47:16.446499  133633 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/proxy-client.crt with IP's: []
	I0906 23:47:16.832942  133633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/proxy-client.crt ...
	I0906 23:47:16.832981  133633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/proxy-client.crt: {Name:mk7378fa9b26f4b7eb790a22f0f6e953e2eed4eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:47:16.833247  133633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/proxy-client.key ...
	I0906 23:47:16.833270  133633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/proxy-client.key: {Name:mkd4246683eed8dba9a8a45bf6831ead3f87c322 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:47:16.833522  133633 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 23:47:16.833562  133633 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca.pem (1082 bytes)
	I0906 23:47:16.833593  133633 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/cert.pem (1123 bytes)
	I0906 23:47:16.833628  133633 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/key.pem (1679 bytes)
	I0906 23:47:16.834424  133633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 23:47:16.870690  133633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 23:47:16.903199  133633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 23:47:16.933817  133633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 23:47:16.965935  133633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0906 23:47:16.997601  133633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 23:47:17.030387  133633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 23:47:17.068354  133633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 23:47:17.103337  133633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 23:47:17.136184  133633 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 23:47:17.159247  133633 ssh_runner.go:195] Run: openssl version
	I0906 23:47:17.167396  133633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 23:47:17.183193  133633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 23:47:17.189418  133633 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:47 /usr/share/ca-certificates/minikubeCA.pem
	I0906 23:47:17.189495  133633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 23:47:17.198655  133633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 23:47:17.213900  133633 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 23:47:17.219387  133633 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0906 23:47:17.219450  133633 kubeadm.go:392] StartCluster: {Name:addons-331285 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-331285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.179 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 23:47:17.219524  133633 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 23:47:17.219631  133633 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 23:47:17.264187  133633 cri.go:89] found id: ""
	I0906 23:47:17.264260  133633 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 23:47:17.279477  133633 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 23:47:17.293590  133633 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 23:47:17.309113  133633 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 23:47:17.309144  133633 kubeadm.go:157] found existing configuration files:
	
	I0906 23:47:17.309193  133633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 23:47:17.321191  133633 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 23:47:17.321260  133633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 23:47:17.337035  133633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 23:47:17.348417  133633 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 23:47:17.348508  133633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 23:47:17.360205  133633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 23:47:17.371594  133633 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 23:47:17.371656  133633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 23:47:17.383887  133633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 23:47:17.395766  133633 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 23:47:17.395841  133633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
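The four grep/rm pairs above are the stale-kubeconfig cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and deleted when the endpoint cannot be found (here every grep exits with status 2 simply because the files do not exist yet). A minimal Go sketch of the same flow, with hypothetical helper names, running the commands locally rather than over SSH as minikube does:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // cleanupStaleKubeconfigs mirrors the grep/rm sequence in the log: if the
    // expected API endpoint cannot be found in a kubeconfig (or the file is
    // missing entirely), the file is removed so kubeadm regenerates it.
    func cleanupStaleKubeconfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint is missing or the file is absent.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                _ = exec.Command("sudo", "rm", "-f", f).Run() // best effort
            }
        }
    }

    func main() {
        cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8443")
    }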
	I0906 23:47:17.408190  133633 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 23:47:17.562405  133633 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 23:47:29.462529  133633 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0906 23:47:29.462615  133633 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 23:47:29.462721  133633 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 23:47:29.462846  133633 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 23:47:29.462993  133633 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 23:47:29.463099  133633 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 23:47:29.464497  133633 out.go:252]   - Generating certificates and keys ...
	I0906 23:47:29.464622  133633 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 23:47:29.464715  133633 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 23:47:29.464851  133633 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 23:47:29.464970  133633 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0906 23:47:29.465072  133633 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0906 23:47:29.465150  133633 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0906 23:47:29.465235  133633 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0906 23:47:29.465387  133633 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-331285 localhost] and IPs [192.168.39.179 127.0.0.1 ::1]
	I0906 23:47:29.465474  133633 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0906 23:47:29.465668  133633 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-331285 localhost] and IPs [192.168.39.179 127.0.0.1 ::1]
	I0906 23:47:29.465761  133633 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 23:47:29.465857  133633 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 23:47:29.465920  133633 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0906 23:47:29.466012  133633 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 23:47:29.466093  133633 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 23:47:29.466176  133633 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 23:47:29.466252  133633 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 23:47:29.466359  133633 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 23:47:29.466443  133633 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 23:47:29.466581  133633 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 23:47:29.466674  133633 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 23:47:29.467898  133633 out.go:252]   - Booting up control plane ...
	I0906 23:47:29.468044  133633 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 23:47:29.468124  133633 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 23:47:29.468221  133633 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 23:47:29.468359  133633 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 23:47:29.468485  133633 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0906 23:47:29.468612  133633 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0906 23:47:29.468723  133633 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 23:47:29.468802  133633 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 23:47:29.468993  133633 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 23:47:29.469140  133633 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 23:47:29.469193  133633 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001711907s
	I0906 23:47:29.469302  133633 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0906 23:47:29.469426  133633 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.179:8443/livez
	I0906 23:47:29.469561  133633 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0906 23:47:29.469678  133633 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0906 23:47:29.469790  133633 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 3.762819384s
	I0906 23:47:29.469885  133633 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.050051868s
	I0906 23:47:29.469983  133633 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.001623263s
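The [control-plane-check] lines amount to polling each component's health endpoint until it returns HTTP 200. A minimal sketch of such a probe loop, assuming the endpoints exactly as printed above and skipping TLS verification because the targets are loopback endpoints with self-signed certificates (kubeadm's real client is configured more carefully):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthy polls url until it answers 200 OK or the timeout elapses.
    func waitHealthy(name, url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s is healthy\n", name)
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy after %s", name, timeout)
    }

    func main() {
        // Endpoints as reported in the control-plane-check phase above.
        checks := map[string]string{
            "kubelet":                 "http://127.0.0.1:10248/healthz",
            "kube-apiserver":          "https://192.168.39.179:8443/livez",
            "kube-controller-manager": "https://127.0.0.1:10257/healthz",
            "kube-scheduler":          "https://127.0.0.1:10259/livez",
        }
        for name, url := range checks {
            if err := waitHealthy(name, url, 4*time.Minute); err != nil {
                panic(err)
            }
        }
    }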
	I0906 23:47:29.470140  133633 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 23:47:29.470304  133633 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 23:47:29.470418  133633 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 23:47:29.470690  133633 kubeadm.go:310] [mark-control-plane] Marking the node addons-331285 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 23:47:29.470761  133633 kubeadm.go:310] [bootstrap-token] Using token: bui58k.13c03q3gzd2n5wz1
	I0906 23:47:29.472118  133633 out.go:252]   - Configuring RBAC rules ...
	I0906 23:47:29.472242  133633 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 23:47:29.472351  133633 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 23:47:29.472521  133633 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 23:47:29.472653  133633 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 23:47:29.472785  133633 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 23:47:29.472915  133633 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 23:47:29.473072  133633 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 23:47:29.473119  133633 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 23:47:29.473160  133633 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 23:47:29.473166  133633 kubeadm.go:310] 
	I0906 23:47:29.473216  133633 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 23:47:29.473222  133633 kubeadm.go:310] 
	I0906 23:47:29.473286  133633 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 23:47:29.473292  133633 kubeadm.go:310] 
	I0906 23:47:29.473326  133633 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 23:47:29.473384  133633 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 23:47:29.473456  133633 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 23:47:29.473474  133633 kubeadm.go:310] 
	I0906 23:47:29.473530  133633 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 23:47:29.473537  133633 kubeadm.go:310] 
	I0906 23:47:29.473591  133633 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 23:47:29.473606  133633 kubeadm.go:310] 
	I0906 23:47:29.473681  133633 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 23:47:29.473799  133633 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 23:47:29.473901  133633 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 23:47:29.473910  133633 kubeadm.go:310] 
	I0906 23:47:29.474014  133633 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 23:47:29.474105  133633 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 23:47:29.474118  133633 kubeadm.go:310] 
	I0906 23:47:29.474224  133633 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bui58k.13c03q3gzd2n5wz1 \
	I0906 23:47:29.474364  133633 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e470780e4a0612a57e93aae22b2fa0b73368719f2f3ed46b601ecb2088a612a \
	I0906 23:47:29.474396  133633 kubeadm.go:310] 	--control-plane 
	I0906 23:47:29.474403  133633 kubeadm.go:310] 
	I0906 23:47:29.474501  133633 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 23:47:29.474514  133633 kubeadm.go:310] 
	I0906 23:47:29.474617  133633 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bui58k.13c03q3gzd2n5wz1 \
	I0906 23:47:29.474775  133633 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e470780e4a0612a57e93aae22b2fa0b73368719f2f3ed46b601ecb2088a612a 
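Per kubeadm's documented token format, the --discovery-token-ca-cert-hash above is a SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate. A sketch that recomputes it, reading the CA from the certificateDir shown in the [certs] phase earlier in this log:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // certificateDir from the [certs] phase above is /var/lib/minikube/certs.
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded Subject Public Key Info of the CA cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }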
	I0906 23:47:29.474801  133633 cni.go:84] Creating CNI manager for ""
	I0906 23:47:29.474815  133633 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 23:47:29.476325  133633 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 23:47:29.477630  133633 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 23:47:29.493599  133633 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
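The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced at the start of this phase. Its exact contents are not in the log; the conflist below is a representative sketch with assumed subnet and plugin values, wrapped in a small Go program that mirrors the write:

    package main

    import "os"

    // Illustrative bridge conflist; field values (subnet, plugin list) are
    // assumptions, not the literal bytes minikube ships.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        // Mirrors the scp step in the log: write the conflist where the
        // kubelet's CNI discovery will find it.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            panic(err)
        }
    }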
	I0906 23:47:29.522479  133633 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 23:47:29.522618  133633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:47:29.522618  133633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-331285 minikube.k8s.io/updated_at=2025_09_06T23_47_29_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=196d69ba373adb3ed4fbcc87dc5d81b7f1adbb1d minikube.k8s.io/name=addons-331285 minikube.k8s.io/primary=true
	I0906 23:47:29.698281  133633 ops.go:34] apiserver oom_adj: -16
	I0906 23:47:29.698298  133633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:47:30.198856  133633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:47:30.699439  133633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:47:31.199272  133633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:47:31.699114  133633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:47:32.198453  133633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:47:32.699248  133633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:47:33.198714  133633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:47:33.698716  133633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:47:34.199415  133633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 23:47:34.355258  133633 kubeadm.go:1105] duration metric: took 4.83272479s to wait for elevateKubeSystemPrivileges
	I0906 23:47:34.355299  133633 kubeadm.go:394] duration metric: took 17.135856728s to StartCluster
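The repeated kubectl get sa default runs above (roughly one every 500ms) are a poll loop: minikube waits for kubeadm to create the default ServiceAccount before treating the kube-system privilege elevation as complete. A sketch of that wait under assumed helper names:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultServiceAccount polls until `kubectl get sa default`
    // succeeds, i.e. the default ServiceAccount exists.
    func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil // service account exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not found after %s", timeout)
    }

    func main() {
        err := waitForDefaultServiceAccount(
            "/var/lib/minikube/binaries/v1.34.0/kubectl",
            "/var/lib/minikube/kubeconfig",
            2*time.Minute,
        )
        if err != nil {
            panic(err)
        }
    }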
	I0906 23:47:34.355320  133633 settings.go:142] acquiring lock: {Name:mkd1edfb540d79a9fb2ef8a25e6ffcf2ec0c7ac0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:47:34.355450  133633 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21132-128697/kubeconfig
	I0906 23:47:34.355872  133633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-128697/kubeconfig: {Name:mk63d1fc2221fbf03163b06fbb544f3ee799299f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:47:34.356104  133633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 23:47:34.356173  133633 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.179 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 23:47:34.356259  133633 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0906 23:47:34.356452  133633 config.go:182] Loaded profile config "addons-331285": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0906 23:47:34.356474  133633 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-331285"
	I0906 23:47:34.356492  133633 addons.go:69] Setting yakd=true in profile "addons-331285"
	I0906 23:47:34.356522  133633 addons.go:238] Setting addon yakd=true in "addons-331285"
	I0906 23:47:34.356526  133633 addons.go:69] Setting volcano=true in profile "addons-331285"
	I0906 23:47:34.356514  133633 addons.go:69] Setting metrics-server=true in profile "addons-331285"
	I0906 23:47:34.356542  133633 addons.go:238] Setting addon volcano=true in "addons-331285"
	I0906 23:47:34.356576  133633 addons.go:238] Setting addon metrics-server=true in "addons-331285"
	I0906 23:47:34.356584  133633 host.go:66] Checking if "addons-331285" exists ...
	I0906 23:47:34.356592  133633 host.go:66] Checking if "addons-331285" exists ...
	I0906 23:47:34.356596  133633 addons.go:69] Setting storage-provisioner=true in profile "addons-331285"
	I0906 23:47:34.356653  133633 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-331285"
	I0906 23:47:34.356657  133633 addons.go:69] Setting gcp-auth=true in profile "addons-331285"
	I0906 23:47:34.356677  133633 addons.go:238] Setting addon storage-provisioner=true in "addons-331285"
	I0906 23:47:34.356680  133633 host.go:66] Checking if "addons-331285" exists ...
	I0906 23:47:34.356700  133633 mustload.go:65] Loading cluster: addons-331285
	I0906 23:47:34.356610  133633 addons.go:69] Setting volumesnapshots=true in profile "addons-331285"
	I0906 23:47:34.356724  133633 addons.go:238] Setting addon volumesnapshots=true in "addons-331285"
	I0906 23:47:34.356787  133633 host.go:66] Checking if "addons-331285" exists ...
	I0906 23:47:34.356935  133633 host.go:66] Checking if "addons-331285" exists ...
	I0906 23:47:34.357073  133633 addons.go:69] Setting registry=true in profile "addons-331285"
	I0906 23:47:34.357102  133633 addons.go:69] Setting ingress=true in profile "addons-331285"
	I0906 23:47:34.357122  133633 addons.go:238] Setting addon ingress=true in "addons-331285"
	I0906 23:47:34.357125  133633 addons.go:238] Setting addon registry=true in "addons-331285"
	I0906 23:47:34.357149  133633 host.go:66] Checking if "addons-331285" exists ...
	I0906 23:47:34.357151  133633 host.go:66] Checking if "addons-331285" exists ...
	I0906 23:47:34.357160  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.357200  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.356649  133633 addons.go:69] Setting default-storageclass=true in profile "addons-331285"
	I0906 23:47:34.357284  133633 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-331285"
	I0906 23:47:34.357292  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.357298  133633 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-331285"
	I0906 23:47:34.357318  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.357329  133633 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-331285"
	I0906 23:47:34.357342  133633 addons.go:69] Setting cloud-spanner=true in profile "addons-331285"
	I0906 23:47:34.357365  133633 addons.go:238] Setting addon cloud-spanner=true in "addons-331285"
	I0906 23:47:34.357374  133633 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-331285"
	I0906 23:47:34.357381  133633 addons.go:69] Setting ingress-dns=true in profile "addons-331285"
	I0906 23:47:34.357391  133633 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-331285"
	I0906 23:47:34.357393  133633 addons.go:238] Setting addon ingress-dns=true in "addons-331285"
	I0906 23:47:34.357405  133633 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-331285"
	I0906 23:47:34.357415  133633 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-331285"
	I0906 23:47:34.356639  133633 host.go:66] Checking if "addons-331285" exists ...
	I0906 23:47:34.357446  133633 host.go:66] Checking if "addons-331285" exists ...
	I0906 23:47:34.357458  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.357495  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.357552  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.357575  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.357598  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.357632  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.357652  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.357678  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.357688  133633 addons.go:69] Setting inspektor-gadget=true in profile "addons-331285"
	I0906 23:47:34.357699  133633 addons.go:238] Setting addon inspektor-gadget=true in "addons-331285"
	I0906 23:47:34.357070  133633 config.go:182] Loaded profile config "addons-331285": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0906 23:47:34.357810  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.357843  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.357894  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.357845  133633 addons.go:69] Setting registry-creds=true in profile "addons-331285"
	I0906 23:47:34.357913  133633 addons.go:238] Setting addon registry-creds=true in "addons-331285"
	I0906 23:47:34.357927  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.357987  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.357994  133633 host.go:66] Checking if "addons-331285" exists ...
	I0906 23:47:34.358010  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.358071  133633 host.go:66] Checking if "addons-331285" exists ...
	I0906 23:47:34.358147  133633 host.go:66] Checking if "addons-331285" exists ...
	I0906 23:47:34.358078  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.358237  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.358328  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.358379  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.358397  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.358435  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.358718  133633 host.go:66] Checking if "addons-331285" exists ...
	I0906 23:47:34.359107  133633 host.go:66] Checking if "addons-331285" exists ...
	I0906 23:47:34.359111  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.359265  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.359535  133633 out.go:179] * Verifying Kubernetes components...
	I0906 23:47:34.362659  133633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 23:47:34.379883  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35449
	I0906 23:47:34.379906  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36281
	I0906 23:47:34.380093  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38625
	I0906 23:47:34.380596  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.380760  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.380987  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.381216  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.381231  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.381349  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.381373  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.381637  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.382224  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.382266  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.382388  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.382408  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.382500  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40917
	I0906 23:47:34.389107  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.389169  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.389310  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.389340  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.389451  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.389496  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.389874  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.389914  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.397824  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.397985  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33409
	I0906 23:47:34.398119  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45749
	I0906 23:47:34.398187  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36541
	I0906 23:47:34.398424  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.398525  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.398573  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.398756  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.399273  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.399318  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.399543  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.399707  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.399726  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.399892  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.399905  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.399905  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.399920  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.399985  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.400042  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.400673  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.400726  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.400776  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.401397  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.401438  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.401697  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.401737  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.402285  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.402316  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.402663  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.402708  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.403156  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.403349  133633 main.go:141] libmachine: (addons-331285) Calling .GetState
	I0906 23:47:34.405241  133633 host.go:66] Checking if "addons-331285" exists ...
	I0906 23:47:34.405647  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.405692  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.431142  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42575
	I0906 23:47:34.431775  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.432471  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.432495  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.432932  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.433562  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.433605  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.433824  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42553
	I0906 23:47:34.433951  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46707
	I0906 23:47:34.435475  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42855
	I0906 23:47:34.435648  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46605
	I0906 23:47:34.435958  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.436328  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.436517  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.436532  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.436737  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.436956  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.437441  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.437462  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.437961  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.437995  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.438389  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.438411  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.439028  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.439637  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.439715  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.440016  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44331
	I0906 23:47:34.440510  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.440602  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.440818  133633 main.go:141] libmachine: (addons-331285) Calling .GetState
	I0906 23:47:34.441048  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.441071  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.441534  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.441773  133633 main.go:141] libmachine: (addons-331285) Calling .GetState
	I0906 23:47:34.441826  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.442319  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.442337  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.443388  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.444861  133633 main.go:141] libmachine: (addons-331285) Calling .GetState
	I0906 23:47:34.446790  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41349
	I0906 23:47:34.447013  133633 addons.go:238] Setting addon default-storageclass=true in "addons-331285"
	I0906 23:47:34.447062  133633 host.go:66] Checking if "addons-331285" exists ...
	I0906 23:47:34.447445  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.447460  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.447483  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.447964  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.447982  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.448343  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.448532  133633 main.go:141] libmachine: (addons-331285) Calling .GetState
	I0906 23:47:34.449160  133633 main.go:141] libmachine: (addons-331285) Calling .DriverName
	I0906 23:47:34.449405  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:34.449416  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:34.451655  133633 main.go:141] libmachine: (addons-331285) Calling .DriverName
	I0906 23:47:34.451713  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:47:34.451752  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:34.451758  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:34.451767  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:34.451773  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:34.452227  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:47:34.452269  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:34.452276  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	W0906 23:47:34.452418  133633 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0906 23:47:34.454747  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38787
	I0906 23:47:34.455418  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39603
	I0906 23:47:34.455445  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.456896  133633 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-331285"
	I0906 23:47:34.456944  133633 host.go:66] Checking if "addons-331285" exists ...
	I0906 23:47:34.457363  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.457416  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.457932  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38549
	I0906 23:47:34.458072  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.458270  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.458284  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.458660  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.458870  133633 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0906 23:47:34.459246  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.459291  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.459516  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.460104  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34187
	I0906 23:47:34.460205  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.460222  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.460241  133633 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0906 23:47:34.460253  133633 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0906 23:47:34.460274  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:34.460390  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.460407  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.460856  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.461464  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.461503  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.462053  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.462404  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34119
	I0906 23:47:34.462681  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.462696  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.462840  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.463044  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.463392  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.463530  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.463550  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.463605  133633 main.go:141] libmachine: (addons-331285) Calling .GetState
	I0906 23:47:34.464171  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.464208  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.464410  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.464463  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.464525  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:34.464545  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.464717  133633 main.go:141] libmachine: (addons-331285) Calling .GetState
	I0906 23:47:34.464798  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:34.465528  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:34.465854  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:34.466310  133633 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/id_rsa Username:docker}
	I0906 23:47:34.466753  133633 main.go:141] libmachine: (addons-331285) Calling .DriverName
	I0906 23:47:34.467686  133633 main.go:141] libmachine: (addons-331285) Calling .DriverName
	I0906 23:47:34.468406  133633 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0906 23:47:34.469010  133633 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0906 23:47:34.470148  133633 out.go:179]   - Using image docker.io/registry:3.0.0
	I0906 23:47:34.471066  133633 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0906 23:47:34.471211  133633 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0906 23:47:34.471233  133633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0906 23:47:34.471257  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:34.471598  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46477
	I0906 23:47:34.472398  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.473577  133633 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0906 23:47:34.473600  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.473628  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.473702  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45743
	I0906 23:47:34.474414  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.474619  133633 main.go:141] libmachine: (addons-331285) Calling .GetState
	I0906 23:47:34.474826  133633 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 23:47:34.474845  133633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0906 23:47:34.474865  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:34.475568  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.476173  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.476196  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.476596  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.477326  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.477372  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.479903  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45073
	I0906 23:47:34.479903  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.479945  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.479964  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:34.479981  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.480013  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:34.480201  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:34.480362  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:34.480412  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:34.480425  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.480479  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.480681  133633 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/id_rsa Username:docker}
	I0906 23:47:34.480791  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:34.480982  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:34.481153  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:34.481314  133633 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/id_rsa Username:docker}
	I0906 23:47:34.481773  133633 main.go:141] libmachine: (addons-331285) Calling .DriverName
	I0906 23:47:34.483063  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.483087  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.483148  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32929
	I0906 23:47:34.483349  133633 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0906 23:47:34.483482  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.484129  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.484171  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.484391  133633 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0906 23:47:34.484411  133633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0906 23:47:34.484433  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:34.484676  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.485214  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.485231  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.485636  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.485845  133633 main.go:141] libmachine: (addons-331285) Calling .GetState
	I0906 23:47:34.487356  133633 main.go:141] libmachine: (addons-331285) Calling .DriverName
	I0906 23:47:34.488209  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35211
	I0906 23:47:34.488862  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.489125  133633 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0906 23:47:34.489727  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.489756  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.490249  133633 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0906 23:47:34.490264  133633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0906 23:47:34.490283  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:34.490872  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.491278  133633 main.go:141] libmachine: (addons-331285) Calling .GetState
	I0906 23:47:34.493451  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.494019  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:34.494048  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.494311  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:34.494523  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:34.494798  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:34.495035  133633 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/id_rsa Username:docker}
	I0906 23:47:34.496122  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.496625  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:34.496644  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.496887  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:34.497101  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:34.497248  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:34.497382  133633 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/id_rsa Username:docker}
	I0906 23:47:34.498257  133633 main.go:141] libmachine: (addons-331285) Calling .DriverName
	I0906 23:47:34.499776  133633 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0906 23:47:34.500892  133633 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0906 23:47:34.501265  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44749
	I0906 23:47:34.501515  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44071
	I0906 23:47:34.501663  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37497
	I0906 23:47:34.503620  133633 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0906 23:47:34.504704  133633 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0906 23:47:34.504876  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42585
	I0906 23:47:34.504999  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.505471  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.505576  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.505642  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.506450  133633 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0906 23:47:34.507254  133633 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0906 23:47:34.507855  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46641
	I0906 23:47:34.507925  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.507964  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.507930  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.508088  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.508128  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.508136  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.508145  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.508145  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.509177  133633 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0906 23:47:34.509268  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.509320  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.509325  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.509381  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.509427  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.509635  133633 main.go:141] libmachine: (addons-331285) Calling .GetState
	I0906 23:47:34.509700  133633 main.go:141] libmachine: (addons-331285) Calling .GetState
	I0906 23:47:34.509752  133633 main.go:141] libmachine: (addons-331285) Calling .GetState
	I0906 23:47:34.510385  133633 main.go:141] libmachine: (addons-331285) Calling .GetState
	I0906 23:47:34.511498  133633 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0906 23:47:34.512245  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.512283  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.512392  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36273
	I0906 23:47:34.512417  133633 main.go:141] libmachine: (addons-331285) Calling .DriverName
	I0906 23:47:34.512395  133633 main.go:141] libmachine: (addons-331285) Calling .DriverName
	I0906 23:47:34.512504  133633 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0906 23:47:34.512522  133633 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0906 23:47:34.512553  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:34.512828  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.513088  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.513394  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.513421  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.513482  133633 main.go:141] libmachine: (addons-331285) Calling .GetState
	I0906 23:47:34.513836  133633 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0906 23:47:34.513872  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.513983  133633 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.40
	I0906 23:47:34.514688  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.514747  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.514887  133633 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 23:47:34.514901  133633 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 23:47:34.514931  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:34.516078  133633 main.go:141] libmachine: (addons-331285) Calling .DriverName
	I0906 23:47:34.516084  133633 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0906 23:47:34.516096  133633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0906 23:47:34.516112  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:34.516170  133633 main.go:141] libmachine: (addons-331285) Calling .DriverName
	I0906 23:47:34.516842  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.516916  133633 main.go:141] libmachine: (addons-331285) Calling .DriverName
	I0906 23:47:34.517072  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43523
	I0906 23:47:34.517564  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:34.517846  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.517855  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:34.517896  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.517936  133633 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0906 23:47:34.517933  133633 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 23:47:34.518168  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:34.518578  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:34.518770  133633 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/id_rsa Username:docker}
	I0906 23:47:34.518782  133633 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0906 23:47:34.519219  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.519496  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.519523  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.519620  133633 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 23:47:34.519671  133633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 23:47:34.519689  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:34.519865  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:34.519893  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.520053  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:34.520126  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.520202  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:34.520219  133633 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0906 23:47:34.520233  133633 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0906 23:47:34.520243  133633 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0906 23:47:34.520250  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:34.520255  133633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0906 23:47:34.520277  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:34.520391  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:34.520914  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.521026  133633 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/id_rsa Username:docker}
	I0906 23:47:34.521152  133633 main.go:141] libmachine: (addons-331285) Calling .GetState
	I0906 23:47:34.521905  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:34.521940  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.522325  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:34.522586  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:34.522780  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:34.522934  133633 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/id_rsa Username:docker}
	I0906 23:47:34.524170  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.524229  133633 main.go:141] libmachine: (addons-331285) Calling .DriverName
	I0906 23:47:34.524432  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41097
	I0906 23:47:34.524779  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:34.524808  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.524958  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:34.525160  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:34.525314  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.525343  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:34.525476  133633 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/id_rsa Username:docker}
	I0906 23:47:34.525738  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:34.525760  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.525828  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.525886  133633 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.0
	I0906 23:47:34.525890  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:34.526066  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:34.526181  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:34.526314  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:34.526370  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.526453  133633 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/id_rsa Username:docker}
	I0906 23:47:34.526812  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:34.526829  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.526998  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:34.527099  133633 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0906 23:47:34.527129  133633 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0906 23:47:34.527191  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:34.527151  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:34.527561  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.527588  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.527932  133633 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/id_rsa Username:docker}
	I0906 23:47:34.528205  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.528880  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:34.528939  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:34.530277  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.530669  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:34.530690  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.530904  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:34.531085  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:34.531251  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:34.531273  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35703
	I0906 23:47:34.531464  133633 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/id_rsa Username:docker}
	I0906 23:47:34.531595  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.531904  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.531922  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.532128  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40617
	I0906 23:47:34.532165  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.532264  133633 main.go:141] libmachine: (addons-331285) Calling .DriverName
	I0906 23:47:34.532829  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.533349  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.533368  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.533768  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.534006  133633 main.go:141] libmachine: (addons-331285) Calling .GetState
	I0906 23:47:34.535522  133633 main.go:141] libmachine: (addons-331285) Calling .DriverName
	I0906 23:47:34.537182  133633 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0906 23:47:34.538357  133633 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 23:47:34.538376  133633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (3051 bytes)
	I0906 23:47:34.538399  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:34.543657  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:34.543692  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44527
	I0906 23:47:34.543656  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.543754  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:34.543778  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.543933  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:34.544120  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:34.544194  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.544244  133633 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/id_rsa Username:docker}
	I0906 23:47:34.544767  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.544798  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.545381  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.545619  133633 main.go:141] libmachine: (addons-331285) Calling .GetState
	I0906 23:47:34.547085  133633 main.go:141] libmachine: (addons-331285) Calling .DriverName
	I0906 23:47:34.547375  133633 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 23:47:34.547425  133633 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 23:47:34.547459  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:34.549745  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41445
	I0906 23:47:34.549955  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.550319  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:34.550344  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.550380  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:34.550485  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:34.550643  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:34.550753  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:34.550830  133633 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/id_rsa Username:docker}
	I0906 23:47:34.550887  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:34.550904  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:34.551404  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:34.551645  133633 main.go:141] libmachine: (addons-331285) Calling .GetState
	I0906 23:47:34.553185  133633 main.go:141] libmachine: (addons-331285) Calling .DriverName
	I0906 23:47:34.554828  133633 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0906 23:47:34.555948  133633 out.go:179]   - Using image docker.io/busybox:stable
	I0906 23:47:34.556786  133633 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0906 23:47:34.556807  133633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0906 23:47:34.556825  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:34.559734  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.560110  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:34.560127  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:34.560333  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:34.560509  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:34.560644  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:34.560738  133633 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/id_rsa Username:docker}
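The repeated sshutil.go lines above each open a fresh SSH client from the same tuple the log prints: the VM's IP, port 22, the per-machine id_rsa key, and the docker user. Below is a minimal sketch of building such a client with golang.org/x/crypto/ssh, assuming key-based auth as logged; the helper name and error handling are illustrative, not minikube's actual sshutil code.

    package sketch

    import (
        "fmt"
        "net"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // newSSHClient dials an SSH endpoint with key-based auth, mirroring the
    // {IP, Port, SSHKeyPath, Username} tuple that sshutil logs above.
    func newSSHClient(ip string, port int, keyPath, user string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, fmt.Errorf("read key %s: %w", keyPath, err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, fmt.Errorf("parse key: %w", err)
        }
        cfg := &ssh.ClientConfig{
            User: user,
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Throwaway test VM, so the host key is not verified here; a
            // production client would check known_hosts instead.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        return ssh.Dial("tcp", net.JoinHostPort(ip, fmt.Sprint(port)), cfg)
    }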
	I0906 23:47:35.272186  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 23:47:35.312072  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0906 23:47:35.461794  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 23:47:35.551266  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0906 23:47:35.586945  133633 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 23:47:35.586983  133633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0906 23:47:35.604486  133633 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0906 23:47:35.604512  133633 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0906 23:47:35.625727  133633 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0906 23:47:35.625765  133633 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0906 23:47:35.678708  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0906 23:47:35.686793  133633 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0906 23:47:35.686813  133633 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0906 23:47:35.787925  133633 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0906 23:47:35.787962  133633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0906 23:47:35.822115  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0906 23:47:35.921247  133633 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.565103598s)
	I0906 23:47:35.921288  133633 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.558543097s)
	I0906 23:47:35.921365  133633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 23:47:35.921443  133633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
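The /bin/bash pipeline above rewrites the coredns ConfigMap in place: sed inserts a hosts plugin block immediately before the existing forward directive, and a log directive before errors, then feeds the result back through kubectl replace. Unescaping the sed expression, the stanza injected into the Corefile is:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }

This is what makes host.minikube.internal resolve to the hypervisor's gateway address from inside the cluster; the confirmation appears further down, where start.go reports the host record injected into CoreDNS's ConfigMap.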
	I0906 23:47:35.963198  133633 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0906 23:47:35.963230  133633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0906 23:47:36.033485  133633 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0906 23:47:36.033523  133633 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0906 23:47:36.169586  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 23:47:36.206295  133633 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 23:47:36.206348  133633 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 23:47:36.208271  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 23:47:36.252826  133633 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0906 23:47:36.252868  133633 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0906 23:47:36.301099  133633 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0906 23:47:36.301140  133633 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0906 23:47:36.303276  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0906 23:47:36.396049  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0906 23:47:36.404849  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0906 23:47:36.592889  133633 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0906 23:47:36.592916  133633 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0906 23:47:36.616327  133633 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 23:47:36.616370  133633 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 23:47:36.690296  133633 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0906 23:47:36.690349  133633 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0906 23:47:36.725078  133633 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0906 23:47:36.725113  133633 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0906 23:47:36.963125  133633 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0906 23:47:36.963166  133633 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0906 23:47:36.984190  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 23:47:37.089596  133633 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0906 23:47:37.089645  133633 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0906 23:47:37.152181  133633 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0906 23:47:37.152224  133633 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0906 23:47:37.345542  133633 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0906 23:47:37.345573  133633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0906 23:47:37.573872  133633 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0906 23:47:37.573911  133633 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0906 23:47:37.583858  133633 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 23:47:37.583885  133633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0906 23:47:37.701022  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0906 23:47:37.964767  133633 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0906 23:47:37.964801  133633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0906 23:47:38.050966  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 23:47:38.446772  133633 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0906 23:47:38.446808  133633 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0906 23:47:38.915320  133633 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0906 23:47:38.915345  133633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0906 23:47:39.380202  133633 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0906 23:47:39.380232  133633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0906 23:47:39.940899  133633 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 23:47:39.940940  133633 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0906 23:47:40.319326  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 23:47:41.985006  133633 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0906 23:47:41.985051  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:41.988807  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:41.989364  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:41.989394  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:41.989597  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:41.989803  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:41.990009  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:41.990245  133633 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/id_rsa Username:docker}
	I0906 23:47:42.518648  133633 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0906 23:47:42.851138  133633 addons.go:238] Setting addon gcp-auth=true in "addons-331285"
	I0906 23:47:42.851209  133633 host.go:66] Checking if "addons-331285" exists ...
	I0906 23:47:42.851525  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:42.851563  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:42.867660  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39179
	I0906 23:47:42.868171  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:42.868677  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:42.868702  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:42.869052  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:42.869708  133633 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:47:42.869754  133633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:47:42.886454  133633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35453
	I0906 23:47:42.887012  133633 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:47:42.887636  133633 main.go:141] libmachine: Using API Version  1
	I0906 23:47:42.887665  133633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:47:42.888121  133633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:47:42.888324  133633 main.go:141] libmachine: (addons-331285) Calling .GetState
	I0906 23:47:42.890307  133633 main.go:141] libmachine: (addons-331285) Calling .DriverName
	I0906 23:47:42.890548  133633 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0906 23:47:42.890571  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHHostname
	I0906 23:47:42.893493  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:42.893915  133633 main.go:141] libmachine: (addons-331285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b8:ba", ip: ""} in network mk-addons-331285: {Iface:virbr1 ExpiryTime:2025-09-07 00:47:00 +0000 UTC Type:0 Mac:52:54:00:75:b8:ba Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:addons-331285 Clientid:01:52:54:00:75:b8:ba}
	I0906 23:47:42.893942  133633 main.go:141] libmachine: (addons-331285) DBG | domain addons-331285 has defined IP address 192.168.39.179 and MAC address 52:54:00:75:b8:ba in network mk-addons-331285
	I0906 23:47:42.894097  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHPort
	I0906 23:47:42.894296  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHKeyPath
	I0906 23:47:42.894475  133633 main.go:141] libmachine: (addons-331285) Calling .GetSSHUsername
	I0906 23:47:42.894627  133633 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/addons-331285/id_rsa Username:docker}
	I0906 23:47:43.836034  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.563803244s)
	I0906 23:47:43.836089  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:43.836103  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:43.836120  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.524012572s)
	I0906 23:47:43.836171  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:43.836194  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:43.836218  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.374389081s)
	I0906 23:47:43.836259  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:43.836269  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:43.836271  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.284971924s)
	I0906 23:47:43.836326  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:43.836326  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.157593957s)
	I0906 23:47:43.836338  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:43.836351  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:43.836360  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:43.836361  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.014220322s)
	I0906 23:47:43.836394  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:43.836403  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:43.836440  133633 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.914977164s)
	I0906 23:47:43.836455  133633 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.915077895s)
	I0906 23:47:43.836457  133633 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0906 23:47:43.836553  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:47:43.836555  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:43.836573  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:43.836575  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:47:43.836582  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:43.836590  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:43.836595  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:43.836603  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:43.836610  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:43.836616  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:43.836683  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:47:43.836765  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.667127141s)
	I0906 23:47:43.836792  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:43.836793  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:43.836800  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:43.836807  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:43.836825  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:43.836832  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:43.836858  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:43.836868  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:43.836889  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.628583877s)
	I0906 23:47:43.836906  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:43.836914  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:43.837009  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.533699997s)
	I0906 23:47:43.837071  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:43.837079  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:43.837089  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:43.837102  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:43.837127  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:47:43.837146  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:43.837152  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:43.837195  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:47:43.837200  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.441120231s)
	I0906 23:47:43.837217  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:43.837228  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	W0906 23:47:43.837232  133633 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0906 23:47:43.837287  133633 retry.go:31] will retry after 185.705219ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
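The apply batch itself mostly succeeded (the gadget namespace, RBAC objects, and daemonset were all created); only ig-crd.yaml failed kubectl's client-side validation because it carries no top-level apiVersion or kind. That is consistent with the transfer logged at 23:47:34, which copied just 14 bytes into /etc/kubernetes/addons/ig-crd.yaml, i.e. an effectively empty manifest. A preflight check of the kind sketched below would surface such a manifest before kubectl does; the helper name and the use of sigs.k8s.io/yaml are illustrative assumptions, and a real version would also need to walk multi-document files.

    package sketch

    import (
        "fmt"
        "os"

        "sigs.k8s.io/yaml"
    )

    // validateManifest reports an error when a YAML manifest lacks the
    // top-level apiVersion/kind fields that kubectl validation requires.
    func validateManifest(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var doc struct {
            APIVersion string `json:"apiVersion"`
            Kind       string `json:"kind"`
        }
        if err := yaml.Unmarshal(data, &doc); err != nil {
            return fmt.Errorf("%s: not valid YAML: %w", path, err)
        }
        if doc.APIVersion == "" || doc.Kind == "" {
            return fmt.Errorf("%s: apiVersion/kind not set", path)
        }
        return nil
    }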
	I0906 23:47:43.837236  133633 addons.go:479] Verifying addon ingress=true in "addons-331285"
	I0906 23:47:43.837431  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:47:43.837466  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:43.837472  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:43.837480  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:43.837489  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:43.837726  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:47:43.837763  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:43.837771  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:43.839429  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:43.839445  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:43.839453  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:43.839460  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:43.839511  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:47:43.839530  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:43.839536  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:43.839627  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:43.839635  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:43.841434  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:47:43.841468  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:43.841475  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:43.842781  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:47:43.842816  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:47:43.842825  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:43.842834  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:43.842835  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:47:43.842832  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.437928002s)
	I0906 23:47:43.842862  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:43.842869  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:43.842876  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:43.842873  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:43.842883  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:43.842888  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:43.842843  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:43.842946  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:43.842942  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.858706133s)
	I0906 23:47:43.837386  133633 node_ready.go:35] waiting up to 6m0s for node "addons-331285" to be "Ready" ...
	I0906 23:47:43.842999  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:43.843013  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:43.843016  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:43.843028  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:43.843036  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:43.843042  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:43.843010  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.141953987s)
	I0906 23:47:43.843077  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:43.843084  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:43.843629  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:47:43.843667  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:47:43.843692  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:43.843713  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:43.843718  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:43.843725  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:43.843736  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:43.843741  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:43.843749  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:43.843726  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:43.843798  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:43.843810  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:43.843860  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:47:43.843879  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:43.843886  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:43.843893  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:43.843899  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:43.844015  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:47:43.844048  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:43.844054  133633 out.go:179] * Verifying ingress addon...
	I0906 23:47:43.844060  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:43.844098  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:43.844112  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:43.844054  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:47:43.845265  133633 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-331285 service yakd-dashboard -n yakd-dashboard
	
	I0906 23:47:43.845808  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:47:43.845852  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:43.845858  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:43.845869  133633 addons.go:479] Verifying addon registry=true in "addons-331285"
	I0906 23:47:43.846111  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:47:43.846152  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:43.846159  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:43.846385  133633 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0906 23:47:43.847257  133633 out.go:179] * Verifying registry addon...
	I0906 23:47:43.847378  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:43.847382  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:47:43.847391  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:43.847401  133633 addons.go:479] Verifying addon metrics-server=true in "addons-331285"
	I0906 23:47:43.849029  133633 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0906 23:47:43.912377  133633 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0906 23:47:43.912404  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:43.912607  133633 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0906 23:47:43.912626  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:43.913262  133633 node_ready.go:49] node "addons-331285" is "Ready"
	I0906 23:47:43.913294  133633 node_ready.go:38] duration metric: took 70.300615ms for node "addons-331285" to be "Ready" ...
	I0906 23:47:43.913312  133633 api_server.go:52] waiting for apiserver process to appear ...
	I0906 23:47:43.913380  133633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
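The kapi.go "waiting for pod ... current state: Pending" lines are a poll loop over pods selected by label until they leave Pending, and node_ready.go does the same for the node's Ready condition. A condensed sketch of the pod half with client-go follows; the function name is an illustrative assumption, and the selector string is the one from the log.

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodsRunning polls every 2s until all pods matching selector
    // in ns exist and report phase Running, or the timeout expires.
    func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil {
                    return false, nil // transient API error: keep polling
                }
                if len(pods.Items) == 0 {
                    return false, nil
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        return false, nil
                    }
                }
                return true, nil
            })
    }

Invoked for the wait above, this would look like waitForPodsRunning(ctx, cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute).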
	I0906 23:47:43.962759  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:43.962780  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:43.963122  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:43.963145  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	W0906 23:47:43.963254  133633 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
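The storageclass warning above is an ordinary optimistic-concurrency conflict: between reading local-path and writing back the cleared default-class annotation, another client updated the object, so its resourceVersion no longer matched and the apiserver refused the stale write. client-go's standard remedy is to re-read and retry on Conflict, roughly as sketched here (the annotation key is the real well-known one; the function name is an illustrative assumption):

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // markNonDefault clears the default-class annotation on a StorageClass,
    // re-reading the object and retrying whenever the update returns 409 Conflict.
    func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
            _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
            return err
        })
    }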
	I0906 23:47:44.023889  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0906 23:47:44.095163  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:44.095187  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:44.095549  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:44.095575  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:44.420972  133633 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-331285" context rescaled to 1 replicas
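The rescale at kapi.go:214 trims CoreDNS to a single replica for this one-node cluster (both coredns-66bc5c9577 pods still appear in the pod lists below before the change lands). The equivalent manual step, assuming the default deployment name, would be:

	kubectl --context addons-331285 -n kube-system scale deployment coredns --replicas=1
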
	I0906 23:47:44.422882  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:44.423549  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:44.684053  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.633029956s)
	W0906 23:47:44.684117  133633 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0906 23:47:44.684148  133633 retry.go:31] will retry after 347.551764ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
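Both attempts above fail for the same reason: the CRDs and a VolumeSnapshotClass instance travel in one apply, and the instance is rejected because its freshly created CRD is not yet established in API discovery, hence "ensure CRDs are installed first". The retry loop eventually wins the race; an explicit wait between the two steps avoids it, sketched here with the CRD name from the manifests:

	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io

before applying csi-hostpath-snapshotclass.yaml.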
	I0906 23:47:44.874477  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:44.874600  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:45.032870  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 23:47:45.362807  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:45.368591  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:45.901626  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:45.901977  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:46.042958  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.723561303s)
	I0906 23:47:46.042984  133633 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.152409051s)
	I0906 23:47:46.043032  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:46.043051  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:46.043059  133633 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.129654254s)
	I0906 23:47:46.043089  133633 api_server.go:72] duration metric: took 11.68688026s to wait for apiserver process to appear ...
	I0906 23:47:46.043190  133633 api_server.go:88] waiting for apiserver healthz status ...
	I0906 23:47:46.043219  133633 api_server.go:253] Checking apiserver healthz at https://192.168.39.179:8443/healthz ...
	I0906 23:47:46.043542  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:46.043566  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:46.043577  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:46.043586  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:46.043595  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:47:46.043834  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:47:46.043832  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:46.043857  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:46.043868  133633 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-331285"
	I0906 23:47:46.044528  133633 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0906 23:47:46.045178  133633 out.go:179] * Verifying csi-hostpath-driver addon...
	I0906 23:47:46.046391  133633 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0906 23:47:46.047091  133633 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0906 23:47:46.047525  133633 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0906 23:47:46.047565  133633 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0906 23:47:46.061524  133633 api_server.go:279] https://192.168.39.179:8443/healthz returned 200:
	ok
	I0906 23:47:46.062721  133633 api_server.go:141] control plane version: v1.34.0
	I0906 23:47:46.062791  133633 api_server.go:131] duration metric: took 19.588997ms to wait for apiserver health ...
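The healthz probe at api_server.go:253 is a plain GET against the secured port. The same endpoint can be queried by hand with the cluster's own credentials when a control-plane hang needs diagnosing; assuming the context from this run:

	kubectl --context addons-331285 get --raw /healthz

which prints "ok" once the apiserver is serving, matching the response logged above.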
	I0906 23:47:46.062806  133633 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 23:47:46.094840  133633 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0906 23:47:46.094885  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:46.095772  133633 system_pods.go:59] 20 kube-system pods found
	I0906 23:47:46.095827  133633 system_pods.go:61] "amd-gpu-device-plugin-z9zkw" [bc981e1c-85ab-43e2-b105-c233dd666280] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0906 23:47:46.095837  133633 system_pods.go:61] "coredns-66bc5c9577-67v9n" [4cc0933b-2694-461e-bbbf-b800563b3faa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 23:47:46.095851  133633 system_pods.go:61] "coredns-66bc5c9577-jgp2g" [2abc90c3-9331-41ad-9de0-ff0cb7934ffe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 23:47:46.095860  133633 system_pods.go:61] "csi-hostpath-attacher-0" [688cca16-8b75-4b42-930c-ff16f46d0d9c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0906 23:47:46.095866  133633 system_pods.go:61] "csi-hostpath-resizer-0" [2c45274e-f9f5-4863-b335-abf2cc417540] Pending
	I0906 23:47:46.095875  133633 system_pods.go:61] "csi-hostpathplugin-grktz" [0751a51e-1915-49b8-a7c9-283777f55c29] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 23:47:46.095879  133633 system_pods.go:61] "etcd-addons-331285" [1b88ad37-058c-4b98-8535-fb6f7346df18] Running
	I0906 23:47:46.095885  133633 system_pods.go:61] "kube-apiserver-addons-331285" [e393d1b8-4d0c-4071-b649-32b8f5a98de1] Running
	I0906 23:47:46.095890  133633 system_pods.go:61] "kube-controller-manager-addons-331285" [dbc0bd3d-cc03-4554-a8bb-45dc5791213d] Running
	I0906 23:47:46.095901  133633 system_pods.go:61] "kube-ingress-dns-minikube" [00a8663e-cf04-4ab4-b1f1-e3ce8ece965a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0906 23:47:46.095910  133633 system_pods.go:61] "kube-proxy-rlcmf" [f20e0c53-acd0-4280-b2b7-5de6601d6ece] Running
	I0906 23:47:46.095914  133633 system_pods.go:61] "kube-scheduler-addons-331285" [066c0d40-887a-42f9-9ac5-a12133acb55d] Running
	I0906 23:47:46.095919  133633 system_pods.go:61] "metrics-server-85b7d694d7-ngqvp" [75137e7c-be7f-4135-ae75-5dfa47025510] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 23:47:46.095926  133633 system_pods.go:61] "nvidia-device-plugin-daemonset-xg7rw" [0d3facd1-ade2-448d-9450-22a49e7f155b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0906 23:47:46.095940  133633 system_pods.go:61] "registry-66898fdd98-rkfxb" [4a6ae9bf-8356-49dc-98bb-c032cbdfad51] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0906 23:47:46.095945  133633 system_pods.go:61] "registry-creds-764b6fb674-mbvcf" [1dfc0afc-106c-497e-8a49-73e8946d6d8f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0906 23:47:46.095950  133633 system_pods.go:61] "registry-proxy-7478x" [6c5e8379-9df7-4f8e-8468-524608b8cb71] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0906 23:47:46.095956  133633 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8ktvk" [5c64ce64-5f08-4e50-bb2b-154b4527620f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 23:47:46.095967  133633 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wd5jr" [f7db3a79-7ef9-44b0-9dd4-ffaf23cacb52] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 23:47:46.095977  133633 system_pods.go:61] "storage-provisioner" [ef7a7827-63f2-45d2-8b18-629c9a489e2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 23:47:46.095989  133633 system_pods.go:74] duration metric: took 33.174868ms to wait for pod list to return data ...
	I0906 23:47:46.096003  133633 default_sa.go:34] waiting for default service account to be created ...
	I0906 23:47:46.169027  133633 default_sa.go:45] found service account: "default"
	I0906 23:47:46.169072  133633 default_sa.go:55] duration metric: took 73.055047ms for default service account to be created ...
	I0906 23:47:46.169091  133633 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 23:47:46.205251  133633 system_pods.go:86] 20 kube-system pods found
	I0906 23:47:46.205304  133633 system_pods.go:89] "amd-gpu-device-plugin-z9zkw" [bc981e1c-85ab-43e2-b105-c233dd666280] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0906 23:47:46.205315  133633 system_pods.go:89] "coredns-66bc5c9577-67v9n" [4cc0933b-2694-461e-bbbf-b800563b3faa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 23:47:46.205328  133633 system_pods.go:89] "coredns-66bc5c9577-jgp2g" [2abc90c3-9331-41ad-9de0-ff0cb7934ffe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 23:47:46.205345  133633 system_pods.go:89] "csi-hostpath-attacher-0" [688cca16-8b75-4b42-930c-ff16f46d0d9c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0906 23:47:46.205355  133633 system_pods.go:89] "csi-hostpath-resizer-0" [2c45274e-f9f5-4863-b335-abf2cc417540] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0906 23:47:46.205367  133633 system_pods.go:89] "csi-hostpathplugin-grktz" [0751a51e-1915-49b8-a7c9-283777f55c29] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 23:47:46.205377  133633 system_pods.go:89] "etcd-addons-331285" [1b88ad37-058c-4b98-8535-fb6f7346df18] Running
	I0906 23:47:46.205383  133633 system_pods.go:89] "kube-apiserver-addons-331285" [e393d1b8-4d0c-4071-b649-32b8f5a98de1] Running
	I0906 23:47:46.205391  133633 system_pods.go:89] "kube-controller-manager-addons-331285" [dbc0bd3d-cc03-4554-a8bb-45dc5791213d] Running
	I0906 23:47:46.205407  133633 system_pods.go:89] "kube-ingress-dns-minikube" [00a8663e-cf04-4ab4-b1f1-e3ce8ece965a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0906 23:47:46.205412  133633 system_pods.go:89] "kube-proxy-rlcmf" [f20e0c53-acd0-4280-b2b7-5de6601d6ece] Running
	I0906 23:47:46.205419  133633 system_pods.go:89] "kube-scheduler-addons-331285" [066c0d40-887a-42f9-9ac5-a12133acb55d] Running
	I0906 23:47:46.205426  133633 system_pods.go:89] "metrics-server-85b7d694d7-ngqvp" [75137e7c-be7f-4135-ae75-5dfa47025510] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 23:47:46.205437  133633 system_pods.go:89] "nvidia-device-plugin-daemonset-xg7rw" [0d3facd1-ade2-448d-9450-22a49e7f155b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0906 23:47:46.205447  133633 system_pods.go:89] "registry-66898fdd98-rkfxb" [4a6ae9bf-8356-49dc-98bb-c032cbdfad51] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0906 23:47:46.205458  133633 system_pods.go:89] "registry-creds-764b6fb674-mbvcf" [1dfc0afc-106c-497e-8a49-73e8946d6d8f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0906 23:47:46.205473  133633 system_pods.go:89] "registry-proxy-7478x" [6c5e8379-9df7-4f8e-8468-524608b8cb71] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0906 23:47:46.205484  133633 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8ktvk" [5c64ce64-5f08-4e50-bb2b-154b4527620f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 23:47:46.205494  133633 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wd5jr" [f7db3a79-7ef9-44b0-9dd4-ffaf23cacb52] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 23:47:46.205512  133633 system_pods.go:89] "storage-provisioner" [ef7a7827-63f2-45d2-8b18-629c9a489e2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 23:47:46.205528  133633 system_pods.go:126] duration metric: took 36.427235ms to wait for k8s-apps to be running ...
	I0906 23:47:46.205543  133633 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 23:47:46.205612  133633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 23:47:46.353330  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:46.354118  133633 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0906 23:47:46.354138  133633 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0906 23:47:46.357223  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:46.525400  133633 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 23:47:46.525425  133633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0906 23:47:46.552999  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:46.664724  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 23:47:46.854269  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:46.856649  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:47.053192  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:47.355912  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:47.355918  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:47.556496  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:47.851689  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:47.854412  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.830484934s)
	W0906 23:47:47.854451  133633 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0906 23:47:47.854473  133633 retry.go:31] will retry after 194.860608ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
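Unlike the snapshot-CRD race earlier, this failure is not a timing issue: kubectl's client-side validation rejects ig-crd.yaml because its first YAML document sets neither apiVersion nor kind (typically an empty leading document or a truncated render), so every retry of the identical file fails the same way, as the repeats below confirm. A quick look at what was actually written to the node, assuming SSH access via the minikube driver:

	minikube -p addons-331285 ssh -- sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml
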
	I0906 23:47:47.856187  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:47.885227  133633 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.679585114s)
	I0906 23:47:47.885271  133633 system_svc.go:56] duration metric: took 1.679723628s WaitForService to wait for kubelet
	I0906 23:47:47.885283  133633 kubeadm.go:578] duration metric: took 13.529074554s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 23:47:47.885310  133633 node_conditions.go:102] verifying NodePressure condition ...
	I0906 23:47:47.885384  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.852459655s)
	I0906 23:47:47.885441  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:47.885463  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:47.885742  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:47.885812  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:47.885829  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:47.885838  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:47.886200  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:47:47.886228  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:47.886250  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:47.891700  133633 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 23:47:47.891740  133633 node_conditions.go:123] node cpu capacity is 2
	I0906 23:47:47.891771  133633 node_conditions.go:105] duration metric: took 6.453399ms to run NodePressure ...
	I0906 23:47:47.891793  133633 start.go:241] waiting for startup goroutines ...
	I0906 23:47:48.050567  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0906 23:47:48.061434  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:48.395595  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:48.395637  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:48.553765  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.888986148s)
	I0906 23:47:48.553823  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:48.553841  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:48.554176  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:48.554200  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:48.554210  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:47:48.554220  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:47:48.554480  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:47:48.554530  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:47:48.554515  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:47:48.555504  133633 addons.go:479] Verifying addon gcp-auth=true in "addons-331285"
	I0906 23:47:48.557170  133633 out.go:179] * Verifying gcp-auth addon...
	I0906 23:47:48.558888  133633 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0906 23:47:48.585623  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:48.590824  133633 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0906 23:47:48.590858  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:47:48.854954  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:48.858146  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:49.054598  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:49.063485  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:47:49.355814  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:49.360712  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:49.558224  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:49.565589  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:47:49.855642  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:49.857057  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:49.945142  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.894528404s)
	W0906 23:47:49.945192  133633 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0906 23:47:49.945214  133633 retry.go:31] will retry after 398.4636ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0906 23:47:50.056632  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:50.064276  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:47:50.344638  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0906 23:47:50.356294  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:50.356544  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:50.555160  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:50.564088  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:47:50.854563  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:50.854568  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:51.053530  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:51.065741  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:47:51.357286  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:51.358111  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:51.552008  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:51.565213  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:47:51.741542  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.396851727s)
	W0906 23:47:51.741608  133633 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0906 23:47:51.741641  133633 retry.go:31] will retry after 837.326885ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0906 23:47:51.851319  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:51.853651  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:52.052665  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:52.064435  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:47:52.354041  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:52.354110  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:52.551796  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:52.567195  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:47:52.579382  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0906 23:47:52.853586  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:52.855104  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:53.053788  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:53.065771  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:47:53.354008  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:53.354988  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:53.554011  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:53.566659  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:47:53.779053  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.199616399s)
	W0906 23:47:53.779104  133633 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0906 23:47:53.779148  133633 retry.go:31] will retry after 1.531771636s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0906 23:47:53.854741  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:53.854877  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:54.053162  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:54.065306  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:47:54.353849  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:54.355146  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:54.555436  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:54.566369  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:47:54.855913  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:54.856349  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:55.055232  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:55.065475  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:47:55.311893  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0906 23:47:55.354030  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:55.355645  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:55.551893  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:55.567723  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:47:55.855024  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:55.858411  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:56.052695  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:56.066444  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:47:56.578940  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:47:56.580260  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:56.580388  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:56.580537  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:56.834508  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.522571274s)
	W0906 23:47:56.834555  133633 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0906 23:47:56.834577  133633 retry.go:31] will retry after 1.494689464s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0906 23:47:56.856674  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:56.858718  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:57.058683  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:57.066098  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:47:57.351533  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:57.356489  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:57.554713  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:57.565967  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:47:57.850560  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:57.852938  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:58.055569  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:58.065156  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:47:58.329428  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0906 23:47:58.351828  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:58.355971  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:58.557345  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:58.565641  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:47:59.284154  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:59.287022  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:47:59.287081  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:59.287480  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:59.385026  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:59.385224  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:59.555113  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:47:59.563935  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:47:59.850420  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:47:59.852257  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:47:59.965733  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.636248023s)
	W0906 23:47:59.965791  133633 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0906 23:47:59.965817  133633 retry.go:31] will retry after 3.938287899s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0906 23:48:00.052147  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:00.066444  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:00.349919  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:00.353228  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:00.551896  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:00.563431  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:00.851834  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:00.856479  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:01.055164  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:01.076093  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:01.354749  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:01.356290  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:01.550879  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:01.563840  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:01.852182  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:01.855672  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:02.052538  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:02.062110  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:02.358513  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:02.358525  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:02.551767  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:02.563075  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:02.858589  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:02.858978  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:03.057223  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:03.071207  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:03.354124  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:03.354527  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:03.551818  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:03.566662  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:03.851483  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:03.853217  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:03.905311  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0906 23:48:04.051675  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:04.062106  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:04.355217  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:04.355306  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:04.555322  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:04.563519  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0906 23:48:04.661588  133633 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0906 23:48:04.661638  133633 retry.go:31] will retry after 3.30677332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0906 23:48:04.851396  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:04.856671  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:05.053697  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:05.064464  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:05.349567  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:05.352392  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:05.553304  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:05.564447  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:05.853607  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:05.860197  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:06.052897  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:06.062836  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:06.351617  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:06.352348  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:06.556074  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:06.561232  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:06.857075  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:06.857136  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:07.061730  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:07.068217  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:07.353659  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:07.354090  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:07.553530  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:07.566063  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:07.853455  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:07.854440  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:07.968655  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0906 23:48:08.055552  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:08.062551  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:08.352263  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:08.355728  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:08.553410  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:08.562465  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:08.856202  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:08.856299  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:09.051177  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:09.058048  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.089338267s)
	W0906 23:48:09.058110  133633 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0906 23:48:09.058139  133633 retry.go:31] will retry after 6.2139822s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
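
Each apply attempt in this loop is a Run/Completed pair from ssh_runner, with the Completed line carrying the command's wall-clock duration (1.089338267s above). The same measurement can be taken by hand from the CI host by timing the ssh round trip (an illustrative one-off, not part of the harness):

	time out/minikube-linux-amd64 -p addons-331285 ssh \
	  "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml"
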
	I0906 23:48:09.063127  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:09.363238  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:09.363451  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:09.553905  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:09.563973  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:09.852879  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:09.853466  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:10.052844  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:10.065590  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:10.355197  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:10.355974  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:10.552254  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:10.563150  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:10.855628  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:10.855798  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:11.053215  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:11.064114  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:11.354784  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:11.357760  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:11.563970  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:11.564673  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:11.859520  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:11.860735  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:12.055083  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:12.068128  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:12.351691  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:12.354724  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:12.662714  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:12.665276  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:12.851171  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:12.853634  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:13.052864  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:13.062629  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:13.351129  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:13.354203  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:13.803446  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:13.805099  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:13.851951  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:13.855884  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:14.054052  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:14.063621  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:14.351749  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:14.355227  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:14.554038  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:14.564880  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:14.855356  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:14.857003  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:15.051589  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:15.066068  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:15.272348  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0906 23:48:15.352444  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:15.356185  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:15.553478  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:15.564473  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:15.854274  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:15.854791  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:16.052426  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:16.063165  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:16.319882  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.047486041s)
	W0906 23:48:16.319930  133633 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0906 23:48:16.319953  133633 retry.go:31] will retry after 12.318470222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0906 23:48:16.356525  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:16.357285  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:16.553542  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:16.563601  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:16.854477  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:16.854644  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:17.051883  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:17.071170  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:17.352104  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:17.354439  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:17.642966  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:17.647385  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:17.852645  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:17.854900  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:18.054082  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:18.063500  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:18.352618  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:18.354849  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:18.551691  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:18.561753  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:18.853334  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:18.854837  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:19.052638  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:19.065329  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:19.354443  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:19.357475  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:19.554687  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:19.567207  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:19.850977  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:19.853912  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:20.054123  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:20.065336  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:20.349888  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:20.355373  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:20.552423  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:20.564521  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:20.851217  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:20.852879  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:21.051678  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:21.062699  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:21.350262  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:21.352105  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:21.551549  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:21.563153  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:21.852037  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:21.854512  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:22.051809  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:22.063472  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:22.356330  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:22.360018  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:22.555953  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:22.565241  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:22.851203  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:22.856353  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:23.055255  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:23.063309  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:23.351064  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:23.354564  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:23.552450  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:23.563795  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:23.854105  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:23.854410  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:24.054030  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:24.154058  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:24.353168  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:24.360886  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 23:48:24.550663  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:24.565122  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:24.857177  133633 kapi.go:107] duration metric: took 41.008141778s to wait for kubernetes.io/minikube-addons=registry ...
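
This is the first of the four label selectors kapi.go has been polling at roughly 500ms intervals (registry, ingress-nginx, csi-hostpath-driver, gcp-auth) to converge: its pods left Pending after about 41s. A one-off equivalent of that wait with plain kubectl (the namespace and timeout values here are assumptions, not from this log):

	kubectl --context addons-331285 wait --namespace kube-system \
	  --for=condition=Ready pod \
	  --selector=kubernetes.io/minikube-addons=registry --timeout=6m
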
	I0906 23:48:24.857668  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:25.052989  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:25.067058  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:25.351632  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:25.554001  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:25.562524  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:25.851878  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:26.054973  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:26.062737  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:26.352727  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:26.554457  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:26.565742  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:26.851605  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:27.051423  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:27.063186  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:27.351147  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:27.559529  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:27.563029  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:27.863449  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:28.063302  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:28.067578  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:28.351491  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:28.556545  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:28.567311  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:28.639540  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0906 23:48:28.860552  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:29.058759  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:29.066111  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:29.356470  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:29.551380  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:29.563683  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:29.852656  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:30.054247  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:30.063015  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:30.226166  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.586577221s)
	W0906 23:48:30.226211  133633 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0906 23:48:30.226238  133633 retry.go:31] will retry after 16.988277297s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
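
The waits retry.go schedules between attempts (3.938s, 3.306s, 6.213s, 12.318s, now 16.988s) look like a roughly doubling backoff with random jitter. A shell sketch of that pattern, assuming simple doubling plus jitter rather than minikube's actual retry.go internals:

	# Re-run the failing apply with doubling, jittered backoff (illustrative):
	delay=2
	for attempt in 1 2 3 4 5; do
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.34.0/kubectl apply --force \
	    -f /etc/kubernetes/addons/ig-crd.yaml \
	    -f /etc/kubernetes/addons/ig-deployment.yaml && break
	  sleep $(( delay + RANDOM % delay ))
	  delay=$(( delay * 2 ))
	done

Backoff like this gives a transient apiserver problem time to clear, but it cannot help here: the manifest itself is invalid, so every attempt fails identically.
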
	I0906 23:48:30.351003  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:30.559246  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:30.564019  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:30.851780  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:31.053858  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:31.065787  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:31.352303  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:31.551650  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:31.562842  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:31.853150  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:32.052341  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:32.062271  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:32.351126  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:32.550807  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:32.563561  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:32.850631  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:33.051531  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:33.062629  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:33.351423  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:33.551766  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:33.564588  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:33.852921  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:34.052567  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:34.064729  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:34.353126  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:34.555039  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:34.563647  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:34.853258  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:35.052473  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:35.063921  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:35.352933  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:35.554588  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:35.566037  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:35.851268  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:36.053124  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:36.064566  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:36.353843  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:36.550832  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:36.564938  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:36.851521  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:37.052655  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:37.062294  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:37.352966  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:37.555287  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:37.563934  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:37.852908  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:38.053475  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:38.062368  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:38.352303  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:38.725197  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:38.726441  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:38.854185  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:39.056581  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:39.064358  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:39.350429  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:39.555529  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:39.567742  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:39.851545  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:40.052990  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:40.063449  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:40.351791  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:40.553537  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:40.562017  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:40.853123  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:41.065321  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:41.080982  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:41.354956  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:41.552250  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:41.563800  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:41.850837  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:42.055058  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:42.063680  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:42.350704  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:42.551696  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:42.563294  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:42.850965  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:43.062722  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:43.069518  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:43.358743  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:43.553080  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:43.564128  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:43.854410  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:44.059010  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:44.064637  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:44.350906  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:44.550479  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:44.563132  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:44.851397  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:45.051517  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:45.066420  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:45.408095  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:45.554405  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:45.566765  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:45.853924  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:46.052165  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:46.061725  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:46.357623  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:46.556305  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:46.562406  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:46.851758  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:47.052443  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:47.154011  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:47.215182  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0906 23:48:47.351812  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:47.551587  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:47.564413  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:47.851766  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:48.326023  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:48.326027  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:48.359238  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:48.555274  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:48.562821  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:48.768570  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.553331761s)
	W0906 23:48:48.768633  133633 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0906 23:48:48.768671  133633 retry.go:31] will retry after 12.645997331s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0906 23:48:48.857028  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:49.058411  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:49.062806  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:49.361733  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:49.556159  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:49.655678  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:49.851038  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:50.053143  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:50.062947  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:50.354007  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:50.574145  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:50.582547  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:50.858061  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:51.060619  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:51.065716  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:51.351817  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:51.553472  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:51.567804  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:51.851836  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:52.053665  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:52.062958  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:52.355162  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:52.551265  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:52.564892  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:52.851469  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:53.053659  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:53.063106  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:53.353797  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:53.559548  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:53.566489  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:53.850506  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:54.054287  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:54.064709  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:54.351268  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:54.551575  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:54.563392  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:54.850870  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:55.055274  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:55.065662  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:55.351189  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:55.554469  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:55.565855  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:55.854563  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:56.055535  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:56.064599  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:56.350688  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:56.552861  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:56.564218  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:56.850398  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:57.220900  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:57.221470  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:57.351280  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:57.551950  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:57.562762  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:57.861258  133633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 23:48:58.051293  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:58.063665  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:58.352071  133633 kapi.go:107] duration metric: took 1m14.505680162s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0906 23:48:58.551562  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:58.562663  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:59.142193  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:48:59.144638  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:59.556110  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:48:59.563037  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:49:00.052964  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:49:00.065754  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:49:00.553622  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:49:00.563897  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:49:01.053012  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:49:01.065960  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:49:01.415450  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0906 23:49:01.555417  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:49:01.563880  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:49:02.052161  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:49:02.063922  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 23:49:02.555172  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:49:02.562630  133633 kapi.go:107] duration metric: took 1m14.003741387s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0906 23:49:02.564388  133633 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-331285 cluster.
	I0906 23:49:02.565457  133633 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0906 23:49:02.566476  133633 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
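The opt-out mentioned in the gcp-auth messages above is applied per pod. A minimal sketch of a pod carrying that label follows; the pod name, image, and label value are illustrative, and per the message only the `gcp-auth-skip-secret` key is what the addon checks.

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds               # illustrative name
	  labels:
	    gcp-auth-skip-secret: "true"   # presence of this key opts the pod out of credential mounting
	spec:
	  containers:
	    - name: app
	      image: nginx                 # illustrative image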
	I0906 23:49:02.598618  133633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.183118199s)
	W0906 23:49:02.598658  133633 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0906 23:49:02.598685  133633 retry.go:31] will retry after 19.707136491s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0906 23:49:03.052555  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:49:03.550902  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:49:04.051627  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:49:04.553844  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:49:05.052364  133633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 23:49:05.552340  133633 kapi.go:107] duration metric: took 1m19.50524597s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0906 23:49:22.308419  133633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0906 23:49:23.047659  133633 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0906 23:49:23.047817  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:49:23.047844  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:49:23.048126  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:49:23.048149  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 23:49:23.048161  133633 main.go:141] libmachine: (addons-331285) DBG | Closing plugin on server side
	I0906 23:49:23.048165  133633 main.go:141] libmachine: Making call to close driver server
	I0906 23:49:23.048176  133633 main.go:141] libmachine: (addons-331285) Calling .Close
	I0906 23:49:23.048418  133633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 23:49:23.048439  133633 main.go:141] libmachine: Making call to close connection to plugin binary
	W0906 23:49:23.048577  133633 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0906 23:49:23.050562  133633 out.go:179] * Enabled addons: registry-creds, cloud-spanner, storage-provisioner, amd-gpu-device-plugin, ingress-dns, nvidia-device-plugin, yakd, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0906 23:49:23.051829  133633 addons.go:514] duration metric: took 1m48.695584917s for enable addons: enabled=[registry-creds cloud-spanner storage-provisioner amd-gpu-device-plugin ingress-dns nvidia-device-plugin yakd metrics-server storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0906 23:49:23.051883  133633 start.go:246] waiting for cluster config update ...
	I0906 23:49:23.051908  133633 start.go:255] writing updated cluster config ...
	I0906 23:49:23.052224  133633 ssh_runner.go:195] Run: rm -f paused
	I0906 23:49:23.058633  133633 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0906 23:49:23.064160  133633 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-67v9n" in "kube-system" namespace to be "Ready" or be gone ...
	I0906 23:49:23.071303  133633 pod_ready.go:94] pod "coredns-66bc5c9577-67v9n" is "Ready"
	I0906 23:49:23.071337  133633 pod_ready.go:86] duration metric: took 7.148414ms for pod "coredns-66bc5c9577-67v9n" in "kube-system" namespace to be "Ready" or be gone ...
	I0906 23:49:23.074554  133633 pod_ready.go:83] waiting for pod "etcd-addons-331285" in "kube-system" namespace to be "Ready" or be gone ...
	I0906 23:49:23.080320  133633 pod_ready.go:94] pod "etcd-addons-331285" is "Ready"
	I0906 23:49:23.080349  133633 pod_ready.go:86] duration metric: took 5.769961ms for pod "etcd-addons-331285" in "kube-system" namespace to be "Ready" or be gone ...
	I0906 23:49:23.083126  133633 pod_ready.go:83] waiting for pod "kube-apiserver-addons-331285" in "kube-system" namespace to be "Ready" or be gone ...
	I0906 23:49:23.089234  133633 pod_ready.go:94] pod "kube-apiserver-addons-331285" is "Ready"
	I0906 23:49:23.089263  133633 pod_ready.go:86] duration metric: took 6.10482ms for pod "kube-apiserver-addons-331285" in "kube-system" namespace to be "Ready" or be gone ...
	I0906 23:49:23.091922  133633 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-331285" in "kube-system" namespace to be "Ready" or be gone ...
	I0906 23:49:23.470626  133633 pod_ready.go:94] pod "kube-controller-manager-addons-331285" is "Ready"
	I0906 23:49:23.470655  133633 pod_ready.go:86] duration metric: took 378.71069ms for pod "kube-controller-manager-addons-331285" in "kube-system" namespace to be "Ready" or be gone ...
	I0906 23:49:23.664656  133633 pod_ready.go:83] waiting for pod "kube-proxy-rlcmf" in "kube-system" namespace to be "Ready" or be gone ...
	I0906 23:49:24.063729  133633 pod_ready.go:94] pod "kube-proxy-rlcmf" is "Ready"
	I0906 23:49:24.063764  133633 pod_ready.go:86] duration metric: took 399.077841ms for pod "kube-proxy-rlcmf" in "kube-system" namespace to be "Ready" or be gone ...
	I0906 23:49:24.263667  133633 pod_ready.go:83] waiting for pod "kube-scheduler-addons-331285" in "kube-system" namespace to be "Ready" or be gone ...
	I0906 23:49:24.662997  133633 pod_ready.go:94] pod "kube-scheduler-addons-331285" is "Ready"
	I0906 23:49:24.663028  133633 pod_ready.go:86] duration metric: took 399.333497ms for pod "kube-scheduler-addons-331285" in "kube-system" namespace to be "Ready" or be gone ...
	I0906 23:49:24.663040  133633 pod_ready.go:40] duration metric: took 1.604369041s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0906 23:49:24.720864  133633 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0906 23:49:24.722391  133633 out.go:179] * Done! kubectl is now configured to use "addons-331285" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 06 23:52:43 addons-331285 crio[824]: time="2025-09-06 23:52:43.872723376Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:09542dddc03b434877be448e72be5eca6a44af5e1e0fb5cb68d23be2e12456d0,PodSandboxId:fb7c5666bf667a71d6b190a32e7d1e6c210b5ed713c8288f9ae098c179ed9a83,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1757202763692888757,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d498dc89-96n4f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6798a3d3-e72a-4664-9033-87b9fe51173c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc5977190586af36cc6fdd34016c19966f47df74e0b11243d29ae90198841df0,PodSandboxId:4fa8f73dc8b173c19e58111b0085207111b8f4cfa4dc145a31c585b157e01ec9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1757202619860913440,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a8d4be2-7a52-401c-a82d-77a73e46f2f9,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d457f4cc282b4397cb5b2e1e59024450ee1118f71799ae2f631bb7a4bef37fb,PodSandboxId:1cc361bbdae190a6c4ecb28d15e03000edd55f6a29a1659f7368efd14b4bbe1b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1757202568518221139,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd3aa778-0ef7-4c6b-b0
16-5ecebb8228bd,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3bf2ab786de58722d4bdbca91e5159f455ebb8683e82e4c16ea2a3dad15f26,PodSandboxId:7921dd657e42fe5e857d30ac3e52bdbee132ac016a8e3290b446f0a052b67332,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1757202537388954969,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-npqqc,io.kubernetes.pod.namespace: ingress-nginx,io.
kubernetes.pod.uid: cd305d82-748d-43f8-a724-6d939a00d8f5,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:6b3982d69c5f0dfd4c37a2ace50fc160fb301a435029dbb330f29726647ab83c,PodSandboxId:9a51817012e64e869aa3026f1820a6cc23d5433e80fde9fa4af323ef78bc06c5,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33
d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757202518989578439,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-l52r5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ccad2890-ceef-42a9-8191-8bd7745b7eeb,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8981f649493de9df18b72e43265d0c220a1eb7131a741c468c1f0c8adaf4393,PodSandboxId:714e5039ed684e034ba7d8843e4b049c104f0d79ff500eff9522d68ad3b24a26,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4
966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757202518812364181,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-956t7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 97ef8fc6-4b48-48d9-9f2a-9283d29daff0,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceedf2465a1414f9d405aae2b1321168d38ed1b2150f7667c108f9d9aafe36fe,PodSandboxId:5381f8e89f09405014d170372541720ff8ad6ecbde9e00478e81fbae8cd74479,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor
-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c0845d92124611df4baa18fd96892b107dd9c28f6178793e020b10264622c27b,State:CONTAINER_RUNNING,CreatedAt:1757202512713287506,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-ptb8b,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 45516a04-8aa0-480d-bf07-793b7f0bf255,},Annotations:map[string]string{io.kubernetes.container.hash: 8e2c4a14,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bfe0c1cb2a908c5b68099301f88f97962405e169e2ae0fe57037906ac618cad,PodSandboxId:44143c5a0124147cd385dab0016117d8a8db3e2a617af8c3fee411174d153660,Metadata:&Con
tainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1757202498242496764,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00a8663e-cf04-4ab4-b1f1-e3ce8ece965a,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02216af4ba7c7d025d7f6
f55c450215735b5c83e572b67a41d94adeb0f759d10,PodSandboxId:2b9b560472fc221c612be3c22b49df505dbc97e69ce0672c8a8b97bf0c7ae135,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1757202483735948050,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-z9zkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc981e1c-85ab-43e2-b105-c233dd666280,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30
,},},&Container{Id:07e36b4b2724d8f0abfee562d33ba48d531b8800b85cd89e6490aeea753f97a6,PodSandboxId:c6ecd1dcc3de192186c02c6212fc7c1c19265f01f193c4bfe09f75fb9cf966c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757202464189173033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef7a7827-63f2-45d2-8b18-629c9a489e2a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:9332b2c107d834bbd8d178408445f0e3c9b72df562f5bf916bf719cb6f78056d,PodSandboxId:7a01caa3192741088493eb7e346f378ceb3fd1202cfcf2d277538135c29f539c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757202455818917658,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-67v9n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cc0933b-2694-461e-bbbf-b800563b3faa,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\
",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d8af6ac1753522d4476f0e7b97360e0c2e628ab07610f81d449e41f416e45c3,PodSandboxId:d09f18e7a35fa1218638b751a2327349c95b93096b56abd8fac25158620f27cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757202454956303114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rlcmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f20e0c53-acd0-4280
-b2b7-5de6601d6ece,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:032f26382aefcaee692b5df4880e61aa652f6bcb177f54a0fd236f5d61f229e6,PodSandboxId:4b486b63668925efe003d58fd03cc2da688dff00c3f5b3edccef0e05eca2fcbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757202442887207815,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-331285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7900656f0851e79c53d95a16b150d0e0,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da9de2c4fcc08ac3b64fb6e3310de6429a49c53f058ad251c09e54a4d27338f1,PodSandboxId:a26ca96f99ce97ce43444bb059a992bf324a570a65c05e08006d99b49f441c23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757202442913225729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-33
1285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 308c1f730469111a51438cfe49590d51,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35340203ba4bade23403aaeebfd3d94ed4e5ed6a6e6db4e3cbb900fb8354cc5f,PodSandboxId:c9a4daee747877edcb7f913e590fdc9a6499f0b8cc69d77ab706088529ca4fe2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757202442848887738,Labels:map[s
tring]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-331285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ee818145503a1329d2b7c0e2762d43,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8037b2280b2c53c1553319dd93d7047a30ee67f4565d27dab6cfa3a4a8837956,PodSandboxId:26abd3f25972f66e25a02cd2c472676d4f8227d67509be8d802b1dcd259a3a78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27
d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757202442855888217,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-331285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 915a0dc0907caebe0bce77b811405cab,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1d1f984d-c84e-4620-91c6-5144d2d9b05b name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 23:52:43 addons-331285 crio[824]: time="2025-09-06 23:52:43.881543601Z" level=debug msg="Request: &ExecSyncRequest{ContainerId:ceedf2465a1414f9d405aae2b1321168d38ed1b2150f7667c108f9d9aafe36fe,Cmd:[/bin/gadgettracermanager -liveness],Timeout:2,}" file="otel-collector/interceptors.go:62" id=a216cb70-971c-4e19-a6ba-6af211a555ae name=/runtime.v1.RuntimeService/ExecSync
	Sep 06 23:52:43 addons-331285 crio[824]: time="2025-09-06 23:52:43.929376440Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e2bc5a53-559d-4e12-a45f-369b8368c7f2 name=/runtime.v1.RuntimeService/Version
	Sep 06 23:52:43 addons-331285 crio[824]: time="2025-09-06 23:52:43.930318263Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e2bc5a53-559d-4e12-a45f-369b8368c7f2 name=/runtime.v1.RuntimeService/Version
	Sep 06 23:52:43 addons-331285 crio[824]: time="2025-09-06 23:52:43.933126359Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d7405e15-413b-4598-9b71-868d2b924435 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 23:52:43 addons-331285 crio[824]: time="2025-09-06 23:52:43.934386527Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757202763934357695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605485,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d7405e15-413b-4598-9b71-868d2b924435 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 23:52:43 addons-331285 crio[824]: time="2025-09-06 23:52:43.935478139Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=52a42607-ea06-416f-ae67-3963e84ff7f8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 23:52:43 addons-331285 crio[824]: time="2025-09-06 23:52:43.935551419Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=52a42607-ea06-416f-ae67-3963e84ff7f8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 23:52:43 addons-331285 crio[824]: time="2025-09-06 23:52:43.936871283Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:09542dddc03b434877be448e72be5eca6a44af5e1e0fb5cb68d23be2e12456d0,PodSandboxId:fb7c5666bf667a71d6b190a32e7d1e6c210b5ed713c8288f9ae098c179ed9a83,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1757202763692888757,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d498dc89-96n4f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6798a3d3-e72a-4664-9033-87b9fe51173c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc5977190586af36cc6fdd34016c19966f47df74e0b11243d29ae90198841df0,PodSandboxId:4fa8f73dc8b173c19e58111b0085207111b8f4cfa4dc145a31c585b157e01ec9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1757202619860913440,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a8d4be2-7a52-401c-a82d-77a73e46f2f9,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d457f4cc282b4397cb5b2e1e59024450ee1118f71799ae2f631bb7a4bef37fb,PodSandboxId:1cc361bbdae190a6c4ecb28d15e03000edd55f6a29a1659f7368efd14b4bbe1b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1757202568518221139,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd3aa778-0ef7-4c6b-b0
16-5ecebb8228bd,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3bf2ab786de58722d4bdbca91e5159f455ebb8683e82e4c16ea2a3dad15f26,PodSandboxId:7921dd657e42fe5e857d30ac3e52bdbee132ac016a8e3290b446f0a052b67332,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1757202537388954969,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-npqqc,io.kubernetes.pod.namespace: ingress-nginx,io.
kubernetes.pod.uid: cd305d82-748d-43f8-a724-6d939a00d8f5,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:6b3982d69c5f0dfd4c37a2ace50fc160fb301a435029dbb330f29726647ab83c,PodSandboxId:9a51817012e64e869aa3026f1820a6cc23d5433e80fde9fa4af323ef78bc06c5,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33
d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757202518989578439,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-l52r5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ccad2890-ceef-42a9-8191-8bd7745b7eeb,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8981f649493de9df18b72e43265d0c220a1eb7131a741c468c1f0c8adaf4393,PodSandboxId:714e5039ed684e034ba7d8843e4b049c104f0d79ff500eff9522d68ad3b24a26,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4
966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757202518812364181,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-956t7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 97ef8fc6-4b48-48d9-9f2a-9283d29daff0,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceedf2465a1414f9d405aae2b1321168d38ed1b2150f7667c108f9d9aafe36fe,PodSandboxId:5381f8e89f09405014d170372541720ff8ad6ecbde9e00478e81fbae8cd74479,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor
-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c0845d92124611df4baa18fd96892b107dd9c28f6178793e020b10264622c27b,State:CONTAINER_RUNNING,CreatedAt:1757202512713287506,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-ptb8b,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 45516a04-8aa0-480d-bf07-793b7f0bf255,},Annotations:map[string]string{io.kubernetes.container.hash: 8e2c4a14,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bfe0c1cb2a908c5b68099301f88f97962405e169e2ae0fe57037906ac618cad,PodSandboxId:44143c5a0124147cd385dab0016117d8a8db3e2a617af8c3fee411174d153660,Metadata:&Con
tainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1757202498242496764,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00a8663e-cf04-4ab4-b1f1-e3ce8ece965a,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02216af4ba7c7d025d7f6
f55c450215735b5c83e572b67a41d94adeb0f759d10,PodSandboxId:2b9b560472fc221c612be3c22b49df505dbc97e69ce0672c8a8b97bf0c7ae135,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1757202483735948050,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-z9zkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc981e1c-85ab-43e2-b105-c233dd666280,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30
,},},&Container{Id:07e36b4b2724d8f0abfee562d33ba48d531b8800b85cd89e6490aeea753f97a6,PodSandboxId:c6ecd1dcc3de192186c02c6212fc7c1c19265f01f193c4bfe09f75fb9cf966c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757202464189173033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef7a7827-63f2-45d2-8b18-629c9a489e2a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:9332b2c107d834bbd8d178408445f0e3c9b72df562f5bf916bf719cb6f78056d,PodSandboxId:7a01caa3192741088493eb7e346f378ceb3fd1202cfcf2d277538135c29f539c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757202455818917658,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-67v9n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cc0933b-2694-461e-bbbf-b800563b3faa,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\
",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d8af6ac1753522d4476f0e7b97360e0c2e628ab07610f81d449e41f416e45c3,PodSandboxId:d09f18e7a35fa1218638b751a2327349c95b93096b56abd8fac25158620f27cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757202454956303114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rlcmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f20e0c53-acd0-4280
-b2b7-5de6601d6ece,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:032f26382aefcaee692b5df4880e61aa652f6bcb177f54a0fd236f5d61f229e6,PodSandboxId:4b486b63668925efe003d58fd03cc2da688dff00c3f5b3edccef0e05eca2fcbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757202442887207815,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-331285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7900656f0851e79c53d95a16b150d0e0,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da9de2c4fcc08ac3b64fb6e3310de6429a49c53f058ad251c09e54a4d27338f1,PodSandboxId:a26ca96f99ce97ce43444bb059a992bf324a570a65c05e08006d99b49f441c23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757202442913225729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-33
1285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 308c1f730469111a51438cfe49590d51,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35340203ba4bade23403aaeebfd3d94ed4e5ed6a6e6db4e3cbb900fb8354cc5f,PodSandboxId:c9a4daee747877edcb7f913e590fdc9a6499f0b8cc69d77ab706088529ca4fe2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757202442848887738,Labels:map[s
tring]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-331285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ee818145503a1329d2b7c0e2762d43,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8037b2280b2c53c1553319dd93d7047a30ee67f4565d27dab6cfa3a4a8837956,PodSandboxId:26abd3f25972f66e25a02cd2c472676d4f8227d67509be8d802b1dcd259a3a78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27
d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757202442855888217,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-331285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 915a0dc0907caebe0bce77b811405cab,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=52a42607-ea06-416f-ae67-3963e84ff7f8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 23:52:43 addons-331285 crio[824]: time="2025-09-06 23:52:43.980972071Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=58ddb446-c8bb-4e49-a45a-6be4b619f420 name=/runtime.v1.RuntimeService/Version
	Sep 06 23:52:43 addons-331285 crio[824]: time="2025-09-06 23:52:43.981063286Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=58ddb446-c8bb-4e49-a45a-6be4b619f420 name=/runtime.v1.RuntimeService/Version
	Sep 06 23:52:43 addons-331285 crio[824]: time="2025-09-06 23:52:43.984552808Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c0dffe5f-efac-4b03-828b-66bcf37305c1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 23:52:43 addons-331285 crio[824]: time="2025-09-06 23:52:43.987206337Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757202763987177205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:605485,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c0dffe5f-efac-4b03-828b-66bcf37305c1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 23:52:43 addons-331285 crio[824]: time="2025-09-06 23:52:43.988052218Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e5a6cb0-ec80-46be-aba2-b3b7bd0b79af name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 23:52:43 addons-331285 crio[824]: time="2025-09-06 23:52:43.988126568Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e5a6cb0-ec80-46be-aba2-b3b7bd0b79af name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 23:52:43 addons-331285 crio[824]: time="2025-09-06 23:52:43.988571787Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:09542dddc03b434877be448e72be5eca6a44af5e1e0fb5cb68d23be2e12456d0,PodSandboxId:fb7c5666bf667a71d6b190a32e7d1e6c210b5ed713c8288f9ae098c179ed9a83,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1757202763692888757,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d498dc89-96n4f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6798a3d3-e72a-4664-9033-87b9fe51173c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc5977190586af36cc6fdd34016c19966f47df74e0b11243d29ae90198841df0,PodSandboxId:4fa8f73dc8b173c19e58111b0085207111b8f4cfa4dc145a31c585b157e01ec9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1757202619860913440,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a8d4be2-7a52-401c-a82d-77a73e46f2f9,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d457f4cc282b4397cb5b2e1e59024450ee1118f71799ae2f631bb7a4bef37fb,PodSandboxId:1cc361bbdae190a6c4ecb28d15e03000edd55f6a29a1659f7368efd14b4bbe1b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1757202568518221139,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd3aa778-0ef7-4c6b-b0
16-5ecebb8228bd,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3bf2ab786de58722d4bdbca91e5159f455ebb8683e82e4c16ea2a3dad15f26,PodSandboxId:7921dd657e42fe5e857d30ac3e52bdbee132ac016a8e3290b446f0a052b67332,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1757202537388954969,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-npqqc,io.kubernetes.pod.namespace: ingress-nginx,io.
kubernetes.pod.uid: cd305d82-748d-43f8-a724-6d939a00d8f5,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:6b3982d69c5f0dfd4c37a2ace50fc160fb301a435029dbb330f29726647ab83c,PodSandboxId:9a51817012e64e869aa3026f1820a6cc23d5433e80fde9fa4af323ef78bc06c5,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33
d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757202518989578439,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-l52r5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ccad2890-ceef-42a9-8191-8bd7745b7eeb,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8981f649493de9df18b72e43265d0c220a1eb7131a741c468c1f0c8adaf4393,PodSandboxId:714e5039ed684e034ba7d8843e4b049c104f0d79ff500eff9522d68ad3b24a26,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4
966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1757202518812364181,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-956t7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 97ef8fc6-4b48-48d9-9f2a-9283d29daff0,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceedf2465a1414f9d405aae2b1321168d38ed1b2150f7667c108f9d9aafe36fe,PodSandboxId:5381f8e89f09405014d170372541720ff8ad6ecbde9e00478e81fbae8cd74479,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor
-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c0845d92124611df4baa18fd96892b107dd9c28f6178793e020b10264622c27b,State:CONTAINER_RUNNING,CreatedAt:1757202512713287506,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-ptb8b,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 45516a04-8aa0-480d-bf07-793b7f0bf255,},Annotations:map[string]string{io.kubernetes.container.hash: 8e2c4a14,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bfe0c1cb2a908c5b68099301f88f97962405e169e2ae0fe57037906ac618cad,PodSandboxId:44143c5a0124147cd385dab0016117d8a8db3e2a617af8c3fee411174d153660,Metadata:&Con
tainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1757202498242496764,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00a8663e-cf04-4ab4-b1f1-e3ce8ece965a,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02216af4ba7c7d025d7f6
f55c450215735b5c83e572b67a41d94adeb0f759d10,PodSandboxId:2b9b560472fc221c612be3c22b49df505dbc97e69ce0672c8a8b97bf0c7ae135,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1757202483735948050,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-z9zkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc981e1c-85ab-43e2-b105-c233dd666280,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30
,},},&Container{Id:07e36b4b2724d8f0abfee562d33ba48d531b8800b85cd89e6490aeea753f97a6,PodSandboxId:c6ecd1dcc3de192186c02c6212fc7c1c19265f01f193c4bfe09f75fb9cf966c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757202464189173033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef7a7827-63f2-45d2-8b18-629c9a489e2a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:9332b2c107d834bbd8d178408445f0e3c9b72df562f5bf916bf719cb6f78056d,PodSandboxId:7a01caa3192741088493eb7e346f378ceb3fd1202cfcf2d277538135c29f539c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1757202455818917658,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-67v9n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cc0933b-2694-461e-bbbf-b800563b3faa,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\
",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d8af6ac1753522d4476f0e7b97360e0c2e628ab07610f81d449e41f416e45c3,PodSandboxId:d09f18e7a35fa1218638b751a2327349c95b93096b56abd8fac25158620f27cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1757202454956303114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rlcmf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f20e0c53-acd0-4280
-b2b7-5de6601d6ece,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:032f26382aefcaee692b5df4880e61aa652f6bcb177f54a0fd236f5d61f229e6,PodSandboxId:4b486b63668925efe003d58fd03cc2da688dff00c3f5b3edccef0e05eca2fcbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1757202442887207815,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-331285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7900656f0851e79c53d95a16b150d0e0,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da9de2c4fcc08ac3b64fb6e3310de6429a49c53f058ad251c09e54a4d27338f1,PodSandboxId:a26ca96f99ce97ce43444bb059a992bf324a570a65c05e08006d99b49f441c23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1757202442913225729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-33
1285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 308c1f730469111a51438cfe49590d51,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35340203ba4bade23403aaeebfd3d94ed4e5ed6a6e6db4e3cbb900fb8354cc5f,PodSandboxId:c9a4daee747877edcb7f913e590fdc9a6499f0b8cc69d77ab706088529ca4fe2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1757202442848887738,Labels:map[s
tring]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-331285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ee818145503a1329d2b7c0e2762d43,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8037b2280b2c53c1553319dd93d7047a30ee67f4565d27dab6cfa3a4a8837956,PodSandboxId:26abd3f25972f66e25a02cd2c472676d4f8227d67509be8d802b1dcd259a3a78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27
d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1757202442855888217,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-331285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 915a0dc0907caebe0bce77b811405cab,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e5a6cb0-ec80-46be-aba2-b3b7bd0b79af name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 23:52:44 addons-331285 crio[824]: time="2025-09-06 23:52:44.045891548Z" level=debug msg="Received container exit code: 0, message: " file="oci/runtime_oci.go:670" id=a216cb70-971c-4e19-a6ba-6af211a555ae name=/runtime.v1.RuntimeService/ExecSync
	Sep 06 23:52:44 addons-331285 crio[824]: time="2025-09-06 23:52:44.046065102Z" level=debug msg="Response: &ExecSyncResponse{Stdout:[FILTERED],Stderr:[],ExitCode:0,}" file="otel-collector/interceptors.go:74" id=a216cb70-971c-4e19-a6ba-6af211a555ae name=/runtime.v1.RuntimeService/ExecSync
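
The Request/Response pairs above are the kubelet polling CRI-O over the CRI gRPC API, cycling through Version, ImageFsInfo, and ListContainers; the empty ContainerFilter in each ListContainersRequest is why CRI-O logs "No filters were applied, returning full container list". As a minimal sketch, the same three RPCs can be replayed by hand from inside the guest with crictl (assuming crictl is on the node's PATH, as it is in minikube's guest image):

    $ minikube -p addons-331285 ssh
    $ sudo crictl version        # RuntimeService/Version
    $ sudo crictl imagefsinfo    # ImageService/ImageFsInfo
    $ sudo crictl ps -a          # RuntimeService/ListContainers with an empty filter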
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	09542dddc03b4       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   fb7c5666bf667       hello-world-app-5d498dc89-96n4f
	cc5977190586a       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago            Running             nginx                     0                   4fa8f73dc8b17       nginx
	6d457f4cc282b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                   0                   1cc361bbdae19       busybox
	6e3bf2ab786de       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago            Running             controller                0                   7921dd657e42f       ingress-nginx-controller-9cc49f96f-npqqc
	6b3982d69c5f0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago            Exited              patch                     0                   9a51817012e64       ingress-nginx-admission-patch-l52r5
	a8981f649493d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago            Exited              create                    0                   714e5039ed684       ingress-nginx-admission-create-956t7
	ceedf2465a141       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506            4 minutes ago            Running             gadget                    0                   5381f8e89f094       gadget-ptb8b
	0bfe0c1cb2a90       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago            Running             minikube-ingress-dns      0                   44143c5a01241       kube-ingress-dns-minikube
	02216af4ba7c7       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago            Running             amd-gpu-device-plugin     0                   2b9b560472fc2       amd-gpu-device-plugin-z9zkw
	07e36b4b2724d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago            Running             storage-provisioner       0                   c6ecd1dcc3de1       storage-provisioner
	9332b2c107d83       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago            Running             coredns                   0                   7a01caa319274       coredns-66bc5c9577-67v9n
	5d8af6ac17535       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                             5 minutes ago            Running             kube-proxy                0                   d09f18e7a35fa       kube-proxy-rlcmf
	da9de2c4fcc08       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                             5 minutes ago            Running             kube-controller-manager   0                   a26ca96f99ce9       kube-controller-manager-addons-331285
	032f26382aefc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago            Running             etcd                      0                   4b486b6366892       etcd-addons-331285
	8037b2280b2c5       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                             5 minutes ago            Running             kube-apiserver            0                   26abd3f25972f       kube-apiserver-addons-331285
	35340203ba4ba       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                             5 minutes ago            Running             kube-scheduler            0                   c9a4daee74787       kube-scheduler-addons-331285
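
In the table above, the two Exited entries are the ingress-nginx admission webhook's one-shot create and patch containers, which run to completion and exit rather than restart, so an Exited state here is not by itself a failure. A sketch for isolating just those rows and their output, reusing the node shell from the previous section (crictl's --name flag takes a regular expression, and container IDs may be abbreviated to a unique prefix):

    $ sudo crictl ps -a --state exited --name 'create|patch'
    $ sudo crictl logs 6b3982d69c5f0    # ID prefix of the exited patch container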
	
	
	==> coredns [9332b2c107d834bbd8d178408445f0e3c9b72df562f5bf916bf719cb6f78056d] <==
	[INFO] 10.244.0.8:57940 - 34113 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000980685s
	[INFO] 10.244.0.8:57940 - 42251 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00016964s
	[INFO] 10.244.0.8:57940 - 60739 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00011447s
	[INFO] 10.244.0.8:57940 - 36230 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000092012s
	[INFO] 10.244.0.8:57940 - 12998 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00016136s
	[INFO] 10.244.0.8:57940 - 45551 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000224026s
	[INFO] 10.244.0.8:57940 - 32549 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000154255s
	[INFO] 10.244.0.8:45073 - 26104 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000163137s
	[INFO] 10.244.0.8:45073 - 26414 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000160493s
	[INFO] 10.244.0.8:44853 - 13910 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081573s
	[INFO] 10.244.0.8:44853 - 13679 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000264355s
	[INFO] 10.244.0.8:35429 - 39659 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000092746s
	[INFO] 10.244.0.8:35429 - 39379 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000635683s
	[INFO] 10.244.0.8:42378 - 56104 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000115899s
	[INFO] 10.244.0.8:42378 - 56322 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000121121s
	[INFO] 10.244.0.23:53453 - 33901 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000472617s
	[INFO] 10.244.0.23:36167 - 41270 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000204853s
	[INFO] 10.244.0.23:54489 - 6689 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000980149s
	[INFO] 10.244.0.23:56828 - 30750 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000128118s
	[INFO] 10.244.0.23:60937 - 60125 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000677311s
	[INFO] 10.244.0.23:55362 - 42505 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129271s
	[INFO] 10.244.0.23:48497 - 22235 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004386711s
	[INFO] 10.244.0.23:57684 - 60341 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.004711638s
	[INFO] 10.244.0.28:48935 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001484025s
	[INFO] 10.244.0.28:54392 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000153347s
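
The NXDOMAIN bursts above are expected resolver behavior rather than lookup failures: pod resolv.conf in a cluster like this carries ndots:5 plus the search list <namespace>.svc.cluster.local, svc.cluster.local, cluster.local, so CoreDNS sees (and rejects) each search-suffixed candidate before the bare service name answers NOERROR, exactly in the order logged. A sketch to reproduce from a throwaway pod, reusing the busybox image already present above (assuming the cluster is still up):

    $ kubectl --context addons-331285 run dnsprobe --rm -it --restart=Never \
        --image=gcr.io/k8s-minikube/busybox -- \
        nslookup registry.kube-system.svc.cluster.local

Writing the name with a trailing dot (registry.kube-system.svc.cluster.local.) marks it fully qualified and skips the search-list walk entirely.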
	
	
	==> describe nodes <==
	Name:               addons-331285
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-331285
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=196d69ba373adb3ed4fbcc87dc5d81b7f1adbb1d
	                    minikube.k8s.io/name=addons-331285
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_06T23_47_29_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-331285
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Sep 2025 23:47:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-331285
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Sep 2025 23:52:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Sep 2025 23:50:33 +0000   Sat, 06 Sep 2025 23:47:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Sep 2025 23:50:33 +0000   Sat, 06 Sep 2025 23:47:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Sep 2025 23:50:33 +0000   Sat, 06 Sep 2025 23:47:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Sep 2025 23:50:33 +0000   Sat, 06 Sep 2025 23:47:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.179
	  Hostname:    addons-331285
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 629ff07508a7465b9886ba4663fab41c
	  System UUID:                629ff075-08a7-465b-9886-ba4663fab41c
	  Boot ID:                    c851151a-f9e7-42ec-8bf5-5ebb57de6bfe
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m19s
	  default                     hello-world-app-5d498dc89-96n4f             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  gadget                      gadget-ptb8b                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-npqqc    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m1s
	  kube-system                 amd-gpu-device-plugin-z9zkw                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 coredns-66bc5c9577-67v9n                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m10s
	  kube-system                 etcd-addons-331285                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m16s
	  kube-system                 kube-apiserver-addons-331285                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-controller-manager-addons-331285       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-proxy-rlcmf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-scheduler-addons-331285                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m23s (x8 over 5m23s)  kubelet          Node addons-331285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m23s (x8 over 5m23s)  kubelet          Node addons-331285 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m23s (x7 over 5m23s)  kubelet          Node addons-331285 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m16s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m16s                  kubelet          Node addons-331285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m16s                  kubelet          Node addons-331285 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m16s                  kubelet          Node addons-331285 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m15s                  kubelet          Node addons-331285 status is now: NodeReady
	  Normal  RegisteredNode           5m12s                  node-controller  Node addons-331285 event: Registered Node addons-331285 in Controller
	
	
	==> dmesg <==
	[  +0.538928] kauditd_printk_skb: 233 callbacks suppressed
	[  +0.101320] kauditd_printk_skb: 352 callbacks suppressed
	[Sep 6 23:48] kauditd_printk_skb: 111 callbacks suppressed
	[  +7.005928] kauditd_printk_skb: 20 callbacks suppressed
	[  +8.301128] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.742971] kauditd_printk_skb: 26 callbacks suppressed
	[  +8.725586] kauditd_printk_skb: 47 callbacks suppressed
	[  +6.103656] kauditd_printk_skb: 26 callbacks suppressed
	[  +3.002115] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.799988] kauditd_printk_skb: 149 callbacks suppressed
	[  +0.078059] kauditd_printk_skb: 37 callbacks suppressed
	[Sep 6 23:49] kauditd_printk_skb: 53 callbacks suppressed
	[  +8.997302] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.000091] kauditd_printk_skb: 2 callbacks suppressed
	[ +13.176542] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.000036] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.065053] kauditd_printk_skb: 76 callbacks suppressed
	[  +0.182353] kauditd_printk_skb: 164 callbacks suppressed
	[Sep 6 23:50] kauditd_printk_skb: 114 callbacks suppressed
	[  +5.282782] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.238644] kauditd_printk_skb: 63 callbacks suppressed
	[  +6.955693] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.388219] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.473218] kauditd_printk_skb: 5 callbacks suppressed
	[Sep 6 23:52] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [032f26382aefcaee692b5df4880e61aa652f6bcb177f54a0fd236f5d61f229e6] <==
	{"level":"info","ts":"2025-09-06T23:48:48.310695Z","caller":"traceutil/trace.go:172","msg":"trace[275579538] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1123; }","duration":"263.084708ms","start":"2025-09-06T23:48:48.047602Z","end":"2025-09-06T23:48:48.310687Z","steps":["trace[275579538] 'agreement among raft nodes before linearized reading'  (duration: 262.798903ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-06T23:48:48.311863Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"252.352404ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-06T23:48:48.311913Z","caller":"traceutil/trace.go:172","msg":"trace[43472191] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1123; }","duration":"252.45016ms","start":"2025-09-06T23:48:48.059454Z","end":"2025-09-06T23:48:48.311904Z","steps":["trace[43472191] 'agreement among raft nodes before linearized reading'  (duration: 251.617209ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-06T23:48:48.313388Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"183.972027ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/gadget/gadget\" limit:1 ","response":"range_response_count:1 size:2184"}
	{"level":"info","ts":"2025-09-06T23:48:48.313789Z","caller":"traceutil/trace.go:172","msg":"trace[1027488622] range","detail":"{range_begin:/registry/configmaps/gadget/gadget; range_end:; response_count:1; response_revision:1123; }","duration":"184.318696ms","start":"2025-09-06T23:48:48.129400Z","end":"2025-09-06T23:48:48.313719Z","steps":["trace[1027488622] 'agreement among raft nodes before linearized reading'  (duration: 182.620077ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-06T23:48:57.212388Z","caller":"traceutil/trace.go:172","msg":"trace[71857602] linearizableReadLoop","detail":"{readStateIndex:1190; appliedIndex:1190; }","duration":"227.658957ms","start":"2025-09-06T23:48:56.984708Z","end":"2025-09-06T23:48:57.212367Z","steps":["trace[71857602] 'read index received'  (duration: 227.651476ms)","trace[71857602] 'applied index is now lower than readState.Index'  (duration: 6.565µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-06T23:48:57.212535Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"227.962603ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-06T23:48:57.212556Z","caller":"traceutil/trace.go:172","msg":"trace[630584733] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1159; }","duration":"227.999307ms","start":"2025-09-06T23:48:56.984551Z","end":"2025-09-06T23:48:57.212550Z","steps":["trace[630584733] 'agreement among raft nodes before linearized reading'  (duration: 227.942549ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-06T23:48:57.213194Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"168.125652ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-06T23:48:57.213224Z","caller":"traceutil/trace.go:172","msg":"trace[883355949] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1160; }","duration":"168.16062ms","start":"2025-09-06T23:48:57.045057Z","end":"2025-09-06T23:48:57.213217Z","steps":["trace[883355949] 'agreement among raft nodes before linearized reading'  (duration: 168.103397ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-06T23:48:57.213422Z","caller":"traceutil/trace.go:172","msg":"trace[497217395] transaction","detail":"{read_only:false; response_revision:1160; number_of_response:1; }","duration":"310.491688ms","start":"2025-09-06T23:48:56.902924Z","end":"2025-09-06T23:48:57.213416Z","steps":["trace[497217395] 'process raft request'  (duration: 309.88486ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-06T23:48:57.213486Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-06T23:48:56.902907Z","time spent":"310.5311ms","remote":"127.0.0.1:45746","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1155 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-09-06T23:48:57.213578Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.449495ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-06T23:48:57.213592Z","caller":"traceutil/trace.go:172","msg":"trace[1679437774] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1160; }","duration":"156.463802ms","start":"2025-09-06T23:48:57.057124Z","end":"2025-09-06T23:48:57.213587Z","steps":["trace[1679437774] 'agreement among raft nodes before linearized reading'  (duration: 156.438458ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-06T23:49:02.329657Z","caller":"traceutil/trace.go:172","msg":"trace[1035817454] transaction","detail":"{read_only:false; response_revision:1196; number_of_response:1; }","duration":"125.366867ms","start":"2025-09-06T23:49:02.204235Z","end":"2025-09-06T23:49:02.329602Z","steps":["trace[1035817454] 'process raft request'  (duration: 125.231429ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-06T23:49:10.361000Z","caller":"traceutil/trace.go:172","msg":"trace[2000543446] transaction","detail":"{read_only:false; response_revision:1221; number_of_response:1; }","duration":"232.941804ms","start":"2025-09-06T23:49:10.128035Z","end":"2025-09-06T23:49:10.360977Z","steps":["trace[2000543446] 'process raft request'  (duration: 232.855346ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-06T23:50:07.057043Z","caller":"traceutil/trace.go:172","msg":"trace[410995227] linearizableReadLoop","detail":"{readStateIndex:1629; appliedIndex:1629; }","duration":"398.808578ms","start":"2025-09-06T23:50:06.658201Z","end":"2025-09-06T23:50:07.057010Z","steps":["trace[410995227] 'read index received'  (duration: 398.803268ms)","trace[410995227] 'applied index is now lower than readState.Index'  (duration: 4.512µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-06T23:50:07.057452Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"399.217589ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotcontents\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-06T23:50:07.058250Z","caller":"traceutil/trace.go:172","msg":"trace[1426137132] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotcontents; range_end:; response_count:0; response_revision:1577; }","duration":"400.03363ms","start":"2025-09-06T23:50:06.658197Z","end":"2025-09-06T23:50:07.058231Z","steps":["trace[1426137132] 'agreement among raft nodes before linearized reading'  (duration: 399.187622ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-06T23:50:07.058300Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-06T23:50:06.658182Z","time spent":"400.095517ms","remote":"127.0.0.1:42002","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":0,"response size":29,"request content":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotcontents\" limit:1 "}
	{"level":"warn","ts":"2025-09-06T23:50:07.057466Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"345.406274ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-06T23:50:07.058473Z","caller":"traceutil/trace.go:172","msg":"trace[1119950674] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1577; }","duration":"346.415106ms","start":"2025-09-06T23:50:06.712048Z","end":"2025-09-06T23:50:07.058463Z","steps":["trace[1119950674] 'agreement among raft nodes before linearized reading'  (duration: 345.393193ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-06T23:50:07.058494Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-06T23:50:06.712034Z","time spent":"346.452921ms","remote":"127.0.0.1:45782","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-09-06T23:50:07.057277Z","caller":"traceutil/trace.go:172","msg":"trace[1571095130] transaction","detail":"{read_only:false; response_revision:1577; number_of_response:1; }","duration":"425.017239ms","start":"2025-09-06T23:50:06.632248Z","end":"2025-09-06T23:50:07.057266Z","steps":["trace[1571095130] 'process raft request'  (duration: 424.874951ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-06T23:50:07.058748Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-06T23:50:06.632233Z","time spent":"426.459421ms","remote":"127.0.0.1:45928","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:1513 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:425 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	
	
	==> kernel <==
	 23:52:44 up 5 min,  0 users,  load average: 2.15, 2.04, 1.04
	Linux addons-331285 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep  4 13:14:36 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [8037b2280b2c53c1553319dd93d7047a30ee67f4565d27dab6cfa3a4a8837956] <==
	E0906 23:49:55.156599       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0906 23:49:55.217752       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0906 23:49:55.234913       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0906 23:49:57.524064       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.203.44"}
	E0906 23:50:10.227728       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0906 23:50:16.527187       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0906 23:50:16.718886       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0906 23:50:16.743275       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.144.150"}
	I0906 23:50:29.876304       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0906 23:50:38.257189       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 23:50:38.257332       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 23:50:38.328508       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 23:50:38.328596       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 23:50:38.440049       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 23:50:38.440204       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 23:50:38.462564       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 23:50:38.462725       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0906 23:50:39.351016       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0906 23:50:39.463743       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0906 23:50:39.612970       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0906 23:50:45.607887       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0906 23:50:55.118516       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0906 23:51:56.825079       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0906 23:52:23.865165       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0906 23:52:42.455661       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.1.10"}
	
	
	==> kube-controller-manager [da9de2c4fcc08ac3b64fb6e3310de6429a49c53f058ad251c09e54a4d27338f1] <==
	E0906 23:50:53.390573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0906 23:50:58.127152       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0906 23:50:58.128692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0906 23:50:59.314955       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0906 23:50:59.315930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0906 23:51:03.140225       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0906 23:51:03.140365       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0906 23:51:03.203074       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0906 23:51:03.203132       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0906 23:51:11.320464       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0906 23:51:11.321514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0906 23:51:14.712809       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0906 23:51:14.714225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0906 23:51:18.481733       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0906 23:51:18.482735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0906 23:51:46.601795       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0906 23:51:46.603097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0906 23:51:49.397749       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0906 23:51:49.398803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0906 23:51:55.037884       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0906 23:51:55.038994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0906 23:52:28.078193       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0906 23:52:28.079499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0906 23:52:35.747162       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0906 23:52:35.748993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [5d8af6ac1753522d4476f0e7b97360e0c2e628ab07610f81d449e41f416e45c3] <==
	I0906 23:47:35.745509       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0906 23:47:35.849776       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0906 23:47:35.849917       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.179"]
	E0906 23:47:35.850017       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 23:47:36.272782       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0906 23:47:36.272845       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 23:47:36.272868       1 server_linux.go:132] "Using iptables Proxier"
	I0906 23:47:36.300795       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 23:47:36.303382       1 server.go:527] "Version info" version="v1.34.0"
	I0906 23:47:36.304458       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 23:47:36.336702       1 config.go:200] "Starting service config controller"
	I0906 23:47:36.336899       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0906 23:47:36.338588       1 config.go:106] "Starting endpoint slice config controller"
	I0906 23:47:36.343518       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0906 23:47:36.338795       1 config.go:403] "Starting serviceCIDR config controller"
	I0906 23:47:36.343563       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0906 23:47:36.342719       1 config.go:309] "Starting node config controller"
	I0906 23:47:36.343607       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0906 23:47:36.343686       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0906 23:47:36.440806       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0906 23:47:36.444779       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0906 23:47:36.444804       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [35340203ba4bade23403aaeebfd3d94ed4e5ed6a6e6db4e3cbb900fb8354cc5f] <==
	E0906 23:47:26.082592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0906 23:47:26.085988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0906 23:47:26.086053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0906 23:47:26.081363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0906 23:47:26.082526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0906 23:47:26.087359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0906 23:47:26.087460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0906 23:47:26.087563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0906 23:47:26.087688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0906 23:47:26.087787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0906 23:47:26.087874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0906 23:47:26.943856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0906 23:47:26.957753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0906 23:47:27.029388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0906 23:47:27.039476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0906 23:47:27.071174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0906 23:47:27.100418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0906 23:47:27.198421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0906 23:47:27.217035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0906 23:47:27.222518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0906 23:47:27.222518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0906 23:47:27.238922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0906 23:47:27.269823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0906 23:47:27.276796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I0906 23:47:29.369540       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 06 23:50:59 addons-331285 kubelet[1510]: E0906 23:50:59.637408    1510 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757202659637040627  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 06 23:50:59 addons-331285 kubelet[1510]: E0906 23:50:59.637460    1510 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757202659637040627  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 06 23:51:09 addons-331285 kubelet[1510]: E0906 23:51:09.642138    1510 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757202669641464707  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 06 23:51:09 addons-331285 kubelet[1510]: E0906 23:51:09.642183    1510 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757202669641464707  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 06 23:51:19 addons-331285 kubelet[1510]: E0906 23:51:19.645520    1510 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757202679644510957  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 06 23:51:19 addons-331285 kubelet[1510]: E0906 23:51:19.645548    1510 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757202679644510957  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 06 23:51:29 addons-331285 kubelet[1510]: E0906 23:51:29.649013    1510 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757202689648289563  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 06 23:51:29 addons-331285 kubelet[1510]: E0906 23:51:29.649070    1510 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757202689648289563  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 06 23:51:39 addons-331285 kubelet[1510]: E0906 23:51:39.652170    1510 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757202699651776303  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 06 23:51:39 addons-331285 kubelet[1510]: E0906 23:51:39.652212    1510 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757202699651776303  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 06 23:51:49 addons-331285 kubelet[1510]: E0906 23:51:49.655713    1510 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757202709654938742  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 06 23:51:49 addons-331285 kubelet[1510]: E0906 23:51:49.655742    1510 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757202709654938742  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 06 23:51:59 addons-331285 kubelet[1510]: E0906 23:51:59.659512    1510 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757202719658802641  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 06 23:51:59 addons-331285 kubelet[1510]: E0906 23:51:59.659545    1510 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757202719658802641  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 06 23:52:01 addons-331285 kubelet[1510]: I0906 23:52:01.805930    1510 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 06 23:52:08 addons-331285 kubelet[1510]: I0906 23:52:08.806974    1510 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-z9zkw" secret="" err="secret \"gcp-auth\" not found"
	Sep 06 23:52:09 addons-331285 kubelet[1510]: E0906 23:52:09.662304    1510 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757202729661717456  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 06 23:52:09 addons-331285 kubelet[1510]: E0906 23:52:09.662358    1510 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757202729661717456  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 06 23:52:19 addons-331285 kubelet[1510]: E0906 23:52:19.665390    1510 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757202739665034931  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 06 23:52:19 addons-331285 kubelet[1510]: E0906 23:52:19.665446    1510 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757202739665034931  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 06 23:52:29 addons-331285 kubelet[1510]: E0906 23:52:29.668797    1510 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757202749668331152  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 06 23:52:29 addons-331285 kubelet[1510]: E0906 23:52:29.668826    1510 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757202749668331152  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 06 23:52:39 addons-331285 kubelet[1510]: E0906 23:52:39.672034    1510 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757202759671569999  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 06 23:52:39 addons-331285 kubelet[1510]: E0906 23:52:39.672067    1510 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757202759671569999  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 06 23:52:42 addons-331285 kubelet[1510]: I0906 23:52:42.443414    1510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grmfz\" (UniqueName: \"kubernetes.io/projected/6798a3d3-e72a-4664-9033-87b9fe51173c-kube-api-access-grmfz\") pod \"hello-world-app-5d498dc89-96n4f\" (UID: \"6798a3d3-e72a-4664-9033-87b9fe51173c\") " pod="default/hello-world-app-5d498dc89-96n4f"
	
	
	==> storage-provisioner [07e36b4b2724d8f0abfee562d33ba48d531b8800b85cd89e6490aeea753f97a6] <==
	W0906 23:52:18.900505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0906 23:52:20.904261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0906 23:52:20.912813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0906 23:52:22.916416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0906 23:52:22.921881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0906 23:52:24.925431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0906 23:52:24.931229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0906 23:52:26.935416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0906 23:52:26.941383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0906 23:52:28.947384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0906 23:52:28.957106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0906 23:52:30.962235       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0906 23:52:30.970470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0906 23:52:32.975736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0906 23:52:32.982602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0906 23:52:34.986744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0906 23:52:34.992198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0906 23:52:36.995869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0906 23:52:37.004249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0906 23:52:39.007945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0906 23:52:39.014296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0906 23:52:41.019521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0906 23:52:41.029138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0906 23:52:43.034850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0906 23:52:43.043528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-331285 -n addons-331285
helpers_test.go:269: (dbg) Run:  kubectl --context addons-331285 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-956t7 ingress-nginx-admission-patch-l52r5
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-331285 describe pod ingress-nginx-admission-create-956t7 ingress-nginx-admission-patch-l52r5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-331285 describe pod ingress-nginx-admission-create-956t7 ingress-nginx-admission-patch-l52r5: exit status 1 (61.625781ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-956t7" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-l52r5" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-331285 describe pod ingress-nginx-admission-create-956t7 ingress-nginx-admission-patch-l52r5: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-331285 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-331285 addons disable ingress-dns --alsologtostderr -v=1: (1.162036803s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-331285 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-331285 addons disable ingress --alsologtostderr -v=1: (7.831065232s)
--- FAIL: TestAddons/parallel/Ingress (158.10s)
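
For local triage of this class of ingress failure, a minimal sketch follows; the profile name (addons-331285) and namespace (ingress-nginx) are taken from this run, while the checks themselves are generic kubectl/minikube usage and not part of the test suite:

	# Hedged triage sketch, not part of the test. Confirms the controller pod is
	# Running, an ingress rule exists, and something is listening on :80 in the VM.
	kubectl --context addons-331285 -n ingress-nginx get pods -o wide
	kubectl --context addons-331285 get ingress -A
	out/minikube-linux-amd64 -p addons-331285 ssh "sudo ss -ltnp | grep ':80 '"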

                                                
                                    
TestPreload (175.01s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-638389 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
E0907 00:43:40.803534  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-638389 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m46.206687833s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-638389 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-638389 image pull gcr.io/k8s-minikube/busybox: (2.52503795s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-638389
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-638389: (7.310814071s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-638389 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0907 00:44:25.452891  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-638389 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (55.734287827s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-638389 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
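
For reference, the failing flow can be replayed by hand with the same commands the test ran above (verbosity and --wait flags omitted here; the final grep is an illustrative check, not part of the test):

	out/minikube-linux-amd64 start -p test-preload-638389 --memory=3072 --preload=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.0
	out/minikube-linux-amd64 -p test-preload-638389 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-638389
	out/minikube-linux-amd64 start -p test-preload-638389 --memory=3072 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-638389 image list | grep k8s-minikube/busybox
	# In this run the grep would match nothing: busybox is absent from the list above.
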
panic.go:636: *** TestPreload FAILED at 2025-09-07 00:44:52.346320885 +0000 UTC m=+3505.376315893
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-638389 -n test-preload-638389
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-638389 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-638389 logs -n 25: (1.182419352s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-665460 ssh -n multinode-665460-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-665460     │ jenkins │ v1.36.0 │ 07 Sep 25 00:29 UTC │ 07 Sep 25 00:29 UTC │
	│ ssh     │ multinode-665460 ssh -n multinode-665460 sudo cat /home/docker/cp-test_multinode-665460-m03_multinode-665460.txt                                          │ multinode-665460     │ jenkins │ v1.36.0 │ 07 Sep 25 00:29 UTC │ 07 Sep 25 00:29 UTC │
	│ cp      │ multinode-665460 cp multinode-665460-m03:/home/docker/cp-test.txt multinode-665460-m02:/home/docker/cp-test_multinode-665460-m03_multinode-665460-m02.txt │ multinode-665460     │ jenkins │ v1.36.0 │ 07 Sep 25 00:29 UTC │ 07 Sep 25 00:29 UTC │
	│ ssh     │ multinode-665460 ssh -n multinode-665460-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-665460     │ jenkins │ v1.36.0 │ 07 Sep 25 00:29 UTC │ 07 Sep 25 00:29 UTC │
	│ ssh     │ multinode-665460 ssh -n multinode-665460-m02 sudo cat /home/docker/cp-test_multinode-665460-m03_multinode-665460-m02.txt                                  │ multinode-665460     │ jenkins │ v1.36.0 │ 07 Sep 25 00:29 UTC │ 07 Sep 25 00:29 UTC │
	│ node    │ multinode-665460 node stop m03                                                                                                                            │ multinode-665460     │ jenkins │ v1.36.0 │ 07 Sep 25 00:29 UTC │ 07 Sep 25 00:29 UTC │
	│ node    │ multinode-665460 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-665460     │ jenkins │ v1.36.0 │ 07 Sep 25 00:29 UTC │ 07 Sep 25 00:30 UTC │
	│ node    │ list -p multinode-665460                                                                                                                                  │ multinode-665460     │ jenkins │ v1.36.0 │ 07 Sep 25 00:30 UTC │                     │
	│ stop    │ -p multinode-665460                                                                                                                                       │ multinode-665460     │ jenkins │ v1.36.0 │ 07 Sep 25 00:30 UTC │ 07 Sep 25 00:33 UTC │
	│ start   │ -p multinode-665460 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-665460     │ jenkins │ v1.36.0 │ 07 Sep 25 00:33 UTC │ 07 Sep 25 00:36 UTC │
	│ node    │ list -p multinode-665460                                                                                                                                  │ multinode-665460     │ jenkins │ v1.36.0 │ 07 Sep 25 00:36 UTC │                     │
	│ node    │ multinode-665460 node delete m03                                                                                                                          │ multinode-665460     │ jenkins │ v1.36.0 │ 07 Sep 25 00:36 UTC │ 07 Sep 25 00:36 UTC │
	│ stop    │ multinode-665460 stop                                                                                                                                     │ multinode-665460     │ jenkins │ v1.36.0 │ 07 Sep 25 00:36 UTC │ 07 Sep 25 00:39 UTC │
	│ start   │ -p multinode-665460 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-665460     │ jenkins │ v1.36.0 │ 07 Sep 25 00:39 UTC │ 07 Sep 25 00:41 UTC │
	│ node    │ list -p multinode-665460                                                                                                                                  │ multinode-665460     │ jenkins │ v1.36.0 │ 07 Sep 25 00:41 UTC │                     │
	│ start   │ -p multinode-665460-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-665460-m02 │ jenkins │ v1.36.0 │ 07 Sep 25 00:41 UTC │                     │
	│ start   │ -p multinode-665460-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-665460-m03 │ jenkins │ v1.36.0 │ 07 Sep 25 00:41 UTC │ 07 Sep 25 00:41 UTC │
	│ node    │ add -p multinode-665460                                                                                                                                   │ multinode-665460     │ jenkins │ v1.36.0 │ 07 Sep 25 00:41 UTC │                     │
	│ delete  │ -p multinode-665460-m03                                                                                                                                   │ multinode-665460-m03 │ jenkins │ v1.36.0 │ 07 Sep 25 00:41 UTC │ 07 Sep 25 00:41 UTC │
	│ delete  │ -p multinode-665460                                                                                                                                       │ multinode-665460     │ jenkins │ v1.36.0 │ 07 Sep 25 00:41 UTC │ 07 Sep 25 00:42 UTC │
	│ start   │ -p test-preload-638389 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-638389  │ jenkins │ v1.36.0 │ 07 Sep 25 00:42 UTC │ 07 Sep 25 00:43 UTC │
	│ image   │ test-preload-638389 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-638389  │ jenkins │ v1.36.0 │ 07 Sep 25 00:43 UTC │ 07 Sep 25 00:43 UTC │
	│ stop    │ -p test-preload-638389                                                                                                                                    │ test-preload-638389  │ jenkins │ v1.36.0 │ 07 Sep 25 00:43 UTC │ 07 Sep 25 00:43 UTC │
	│ start   │ -p test-preload-638389 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-638389  │ jenkins │ v1.36.0 │ 07 Sep 25 00:43 UTC │ 07 Sep 25 00:44 UTC │
	│ image   │ test-preload-638389 image list                                                                                                                            │ test-preload-638389  │ jenkins │ v1.36.0 │ 07 Sep 25 00:44 UTC │ 07 Sep 25 00:44 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
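
The tail of the command table above is the TestPreload scenario: start a cluster on v1.32.0 with --preload=false, pull an extra image, stop, restart (now with a preload tarball available), and list images to check that the pulled image survived the restart. A minimal Go sketch replaying that sequence with os/exec; it is an illustration, not the test's actual harness, and the profile name is a placeholder:

package main

import (
	"log"
	"os/exec"
)

// run invokes the minikube binary with the given arguments and aborts on failure.
func run(args ...string) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
}

func main() {
	const p = "test-preload-demo" // placeholder profile name
	run("start", "-p", p, "--memory=3072", "--preload=false",
		"--driver=kvm2", "--container-runtime=crio", "--kubernetes-version=v1.32.0")
	run("-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox")
	run("stop", "-p", p)
	run("start", "-p", p, "--memory=3072", "--wait=true",
		"--driver=kvm2", "--container-runtime=crio")
	run("-p", p, "image", "list") // busybox should still appear here
}

If restoring the preload clobbered container storage on restart, the final image list would be missing busybox, which is roughly what TestPreload guards against.
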
	
	
	==> Last Start <==
	Log file created at: 2025/09/07 00:43:56
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0907 00:43:56.417837  163749 out.go:360] Setting OutFile to fd 1 ...
	I0907 00:43:56.417949  163749 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:43:56.417961  163749 out.go:374] Setting ErrFile to fd 2...
	I0907 00:43:56.417968  163749 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:43:56.418150  163749 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-128697/.minikube/bin
	I0907 00:43:56.418764  163749 out.go:368] Setting JSON to false
	I0907 00:43:56.419707  163749 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5179,"bootTime":1757200657,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0907 00:43:56.419819  163749 start.go:140] virtualization: kvm guest
	I0907 00:43:56.421906  163749 out.go:179] * [test-preload-638389] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0907 00:43:56.423314  163749 out.go:179]   - MINIKUBE_LOCATION=21132
	I0907 00:43:56.423343  163749 notify.go:220] Checking for updates...
	I0907 00:43:56.425661  163749 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:43:56.426776  163749 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21132-128697/kubeconfig
	I0907 00:43:56.427801  163749 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-128697/.minikube
	I0907 00:43:56.428986  163749 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0907 00:43:56.430172  163749 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 00:43:56.431813  163749 config.go:182] Loaded profile config "test-preload-638389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0907 00:43:56.432444  163749 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:43:56.432514  163749 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:43:56.448239  163749 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35865
	I0907 00:43:56.448801  163749 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:43:56.449460  163749 main.go:141] libmachine: Using API Version  1
	I0907 00:43:56.449488  163749 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:43:56.449853  163749 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:43:56.450083  163749 main.go:141] libmachine: (test-preload-638389) Calling .DriverName
	I0907 00:43:56.452023  163749 out.go:179] * Kubernetes 1.34.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.0
	I0907 00:43:56.453335  163749 driver.go:421] Setting default libvirt URI to qemu:///system
	I0907 00:43:56.453624  163749 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:43:56.453663  163749 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:43:56.468909  163749 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37023
	I0907 00:43:56.469388  163749 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:43:56.469867  163749 main.go:141] libmachine: Using API Version  1
	I0907 00:43:56.469893  163749 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:43:56.470261  163749 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:43:56.470462  163749 main.go:141] libmachine: (test-preload-638389) Calling .DriverName
	I0907 00:43:56.507306  163749 out.go:179] * Using the kvm2 driver based on existing profile
	I0907 00:43:56.508649  163749 start.go:304] selected driver: kvm2
	I0907 00:43:56.508671  163749 start.go:918] validating driver "kvm2" against &{Name:test-preload-638389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-638389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0907 00:43:56.508821  163749 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 00:43:56.509789  163749 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:43:56.509863  163749 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21132-128697/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0907 00:43:56.525909  163749 install.go:137] /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0907 00:43:56.526346  163749 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0907 00:43:56.526380  163749 cni.go:84] Creating CNI manager for ""
	I0907 00:43:56.526424  163749 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:43:56.526478  163749 start.go:348] cluster config:
	{Name:test-preload-638389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-638389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0907 00:43:56.526579  163749 iso.go:125] acquiring lock: {Name:mk3bd5f7fbe7836651644a94b41f2b6111c9b69d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:43:56.528730  163749 out.go:179] * Starting "test-preload-638389" primary control-plane node in "test-preload-638389" cluster
	I0907 00:43:56.530015  163749 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0907 00:43:56.553241  163749 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0907 00:43:56.553272  163749 cache.go:58] Caching tarball of preloaded images
	I0907 00:43:56.553435  163749 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0907 00:43:56.555407  163749 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I0907 00:43:56.556690  163749 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0907 00:43:56.586539  163749 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21132-128697/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0907 00:43:59.222734  163749 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0907 00:43:59.222864  163749 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21132-128697/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0907 00:43:59.990845  163749 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
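
The download above fetches the preload tarball with a ?checksum=md5:... query and then verifies the saved file before trusting it. A hedged sketch of the same download-then-verify-MD5 pattern (standalone, not minikube's download package; the usage is illustrative):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetchWithMD5 downloads url to dst and checks the payload against wantMD5 (hex).
func fetchWithMD5(url, dst, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	// Tee the body so we hash exactly the bytes written to disk.
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	if len(os.Args) != 4 {
		fmt.Fprintln(os.Stderr, "usage: fetch <url> <dst> <md5hex>")
		os.Exit(2)
	}
	if err := fetchWithMD5(os.Args[1], os.Args[2], os.Args[3]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
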
	I0907 00:43:59.990994  163749 profile.go:143] Saving config to /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/test-preload-638389/config.json ...
	I0907 00:43:59.991227  163749 start.go:360] acquireMachinesLock for test-preload-638389: {Name:mk3b58ef42f26d446b63d531f457f6ac8953e3f5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 00:43:59.991294  163749 start.go:364] duration metric: took 44.963µs to acquireMachinesLock for "test-preload-638389"
	I0907 00:43:59.991310  163749 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:43:59.991316  163749 fix.go:54] fixHost starting: 
	I0907 00:43:59.991563  163749 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:43:59.991600  163749 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:44:00.007615  163749 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36673
	I0907 00:44:00.008150  163749 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:44:00.008644  163749 main.go:141] libmachine: Using API Version  1
	I0907 00:44:00.008674  163749 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:44:00.009158  163749 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:44:00.009397  163749 main.go:141] libmachine: (test-preload-638389) Calling .DriverName
	I0907 00:44:00.009531  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetState
	I0907 00:44:00.011458  163749 fix.go:112] recreateIfNeeded on test-preload-638389: state=Stopped err=<nil>
	I0907 00:44:00.011488  163749 main.go:141] libmachine: (test-preload-638389) Calling .DriverName
	W0907 00:44:00.011669  163749 fix.go:138] unexpected machine state, will restart: <nil>
	I0907 00:44:00.013791  163749 out.go:252] * Restarting existing kvm2 VM for "test-preload-638389" ...
	I0907 00:44:00.013834  163749 main.go:141] libmachine: (test-preload-638389) Calling .Start
	I0907 00:44:00.014064  163749 main.go:141] libmachine: (test-preload-638389) starting domain...
	I0907 00:44:00.014090  163749 main.go:141] libmachine: (test-preload-638389) ensuring networks are active...
	I0907 00:44:00.015214  163749 main.go:141] libmachine: (test-preload-638389) Ensuring network default is active
	I0907 00:44:00.015568  163749 main.go:141] libmachine: (test-preload-638389) Ensuring network mk-test-preload-638389 is active
	I0907 00:44:00.015900  163749 main.go:141] libmachine: (test-preload-638389) getting domain XML...
	I0907 00:44:00.016844  163749 main.go:141] libmachine: (test-preload-638389) creating domain...
	I0907 00:44:00.369197  163749 main.go:141] libmachine: (test-preload-638389) waiting for IP...
	I0907 00:44:00.370220  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:00.370630  163749 main.go:141] libmachine: (test-preload-638389) DBG | unable to find current IP address of domain test-preload-638389 in network mk-test-preload-638389
	I0907 00:44:00.370703  163749 main.go:141] libmachine: (test-preload-638389) DBG | I0907 00:44:00.370602  163801 retry.go:31] will retry after 202.329513ms: waiting for domain to come up
	I0907 00:44:00.574991  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:00.575472  163749 main.go:141] libmachine: (test-preload-638389) DBG | unable to find current IP address of domain test-preload-638389 in network mk-test-preload-638389
	I0907 00:44:00.575532  163749 main.go:141] libmachine: (test-preload-638389) DBG | I0907 00:44:00.575443  163801 retry.go:31] will retry after 378.359498ms: waiting for domain to come up
	I0907 00:44:00.955499  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:00.956056  163749 main.go:141] libmachine: (test-preload-638389) DBG | unable to find current IP address of domain test-preload-638389 in network mk-test-preload-638389
	I0907 00:44:00.956086  163749 main.go:141] libmachine: (test-preload-638389) DBG | I0907 00:44:00.956044  163801 retry.go:31] will retry after 322.995231ms: waiting for domain to come up
	I0907 00:44:01.280690  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:01.281228  163749 main.go:141] libmachine: (test-preload-638389) DBG | unable to find current IP address of domain test-preload-638389 in network mk-test-preload-638389
	I0907 00:44:01.281253  163749 main.go:141] libmachine: (test-preload-638389) DBG | I0907 00:44:01.281189  163801 retry.go:31] will retry after 586.176651ms: waiting for domain to come up
	I0907 00:44:01.869053  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:01.869557  163749 main.go:141] libmachine: (test-preload-638389) DBG | unable to find current IP address of domain test-preload-638389 in network mk-test-preload-638389
	I0907 00:44:01.869590  163749 main.go:141] libmachine: (test-preload-638389) DBG | I0907 00:44:01.869522  163801 retry.go:31] will retry after 583.078504ms: waiting for domain to come up
	I0907 00:44:02.453972  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:02.454358  163749 main.go:141] libmachine: (test-preload-638389) DBG | unable to find current IP address of domain test-preload-638389 in network mk-test-preload-638389
	I0907 00:44:02.454394  163749 main.go:141] libmachine: (test-preload-638389) DBG | I0907 00:44:02.454319  163801 retry.go:31] will retry after 653.044516ms: waiting for domain to come up
	I0907 00:44:03.109118  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:03.109549  163749 main.go:141] libmachine: (test-preload-638389) DBG | unable to find current IP address of domain test-preload-638389 in network mk-test-preload-638389
	I0907 00:44:03.109576  163749 main.go:141] libmachine: (test-preload-638389) DBG | I0907 00:44:03.109495  163801 retry.go:31] will retry after 955.539965ms: waiting for domain to come up
	I0907 00:44:04.066608  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:04.067165  163749 main.go:141] libmachine: (test-preload-638389) DBG | unable to find current IP address of domain test-preload-638389 in network mk-test-preload-638389
	I0907 00:44:04.067191  163749 main.go:141] libmachine: (test-preload-638389) DBG | I0907 00:44:04.067122  163801 retry.go:31] will retry after 1.371998269s: waiting for domain to come up
	I0907 00:44:05.441461  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:05.441973  163749 main.go:141] libmachine: (test-preload-638389) DBG | unable to find current IP address of domain test-preload-638389 in network mk-test-preload-638389
	I0907 00:44:05.442105  163749 main.go:141] libmachine: (test-preload-638389) DBG | I0907 00:44:05.441958  163801 retry.go:31] will retry after 1.808307175s: waiting for domain to come up
	I0907 00:44:07.253161  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:07.253572  163749 main.go:141] libmachine: (test-preload-638389) DBG | unable to find current IP address of domain test-preload-638389 in network mk-test-preload-638389
	I0907 00:44:07.253599  163749 main.go:141] libmachine: (test-preload-638389) DBG | I0907 00:44:07.253541  163801 retry.go:31] will retry after 1.650718029s: waiting for domain to come up
	I0907 00:44:08.906016  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:08.906368  163749 main.go:141] libmachine: (test-preload-638389) DBG | unable to find current IP address of domain test-preload-638389 in network mk-test-preload-638389
	I0907 00:44:08.906434  163749 main.go:141] libmachine: (test-preload-638389) DBG | I0907 00:44:08.906306  163801 retry.go:31] will retry after 1.811859863s: waiting for domain to come up
	I0907 00:44:10.719840  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:10.720342  163749 main.go:141] libmachine: (test-preload-638389) DBG | unable to find current IP address of domain test-preload-638389 in network mk-test-preload-638389
	I0907 00:44:10.720371  163749 main.go:141] libmachine: (test-preload-638389) DBG | I0907 00:44:10.720286  163801 retry.go:31] will retry after 2.830586925s: waiting for domain to come up
	I0907 00:44:13.554047  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:13.554485  163749 main.go:141] libmachine: (test-preload-638389) DBG | unable to find current IP address of domain test-preload-638389 in network mk-test-preload-638389
	I0907 00:44:13.554516  163749 main.go:141] libmachine: (test-preload-638389) DBG | I0907 00:44:13.554439  163801 retry.go:31] will retry after 3.172764311s: waiting for domain to come up
	I0907 00:44:16.728519  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:16.728942  163749 main.go:141] libmachine: (test-preload-638389) DBG | unable to find current IP address of domain test-preload-638389 in network mk-test-preload-638389
	I0907 00:44:16.728970  163749 main.go:141] libmachine: (test-preload-638389) DBG | I0907 00:44:16.728918  163801 retry.go:31] will retry after 5.412800899s: waiting for domain to come up
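
The retry.go lines above poll for the domain's DHCP lease with growing, jittered delays (about 0.2s at first, over 5s by the last attempt). A minimal sketch of that poll-with-backoff shape; the doubling factor, jitter range, and cap are assumptions rather than minikube's exact retry parameters:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check until it succeeds or timeout elapses, roughly doubling
// the delay between attempts with +/-50% jitter, capped at max.
func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	const max = 6 * time.Second
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting: %w", err)
		}
		// Jitter in [delay/2, 3*delay/2), echoing the uneven intervals in the log.
		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v\n", sleep)
		time.Sleep(sleep)
		if delay *= 2; delay > max {
			delay = max
		}
	}
}

func main() {
	n := 0
	_ = waitFor(func() error {
		if n++; n < 4 {
			return errors.New("no IP yet")
		}
		return nil
	}, 30*time.Second)
}
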
	I0907 00:44:22.145052  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:22.145564  163749 main.go:141] libmachine: (test-preload-638389) found domain IP: 192.168.39.172
	I0907 00:44:22.145584  163749 main.go:141] libmachine: (test-preload-638389) reserving static IP address...
	I0907 00:44:22.145603  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has current primary IP address 192.168.39.172 and MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:22.146371  163749 main.go:141] libmachine: (test-preload-638389) reserved static IP address 192.168.39.172 for domain test-preload-638389
	I0907 00:44:22.146429  163749 main.go:141] libmachine: (test-preload-638389) DBG | found host DHCP lease matching {name: "test-preload-638389", mac: "52:54:00:2c:df:3b", ip: "192.168.39.172"} in network mk-test-preload-638389: {Iface:virbr1 ExpiryTime:2025-09-07 01:44:11 +0000 UTC Type:0 Mac:52:54:00:2c:df:3b Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-638389 Clientid:01:52:54:00:2c:df:3b}
	I0907 00:44:22.146444  163749 main.go:141] libmachine: (test-preload-638389) waiting for SSH...
	I0907 00:44:22.146472  163749 main.go:141] libmachine: (test-preload-638389) DBG | skip adding static IP to network mk-test-preload-638389 - found existing host DHCP lease matching {name: "test-preload-638389", mac: "52:54:00:2c:df:3b", ip: "192.168.39.172"}
	I0907 00:44:22.146486  163749 main.go:141] libmachine: (test-preload-638389) DBG | Getting to WaitForSSH function...
	I0907 00:44:22.148611  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:22.148953  163749 main.go:141] libmachine: (test-preload-638389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:df:3b", ip: ""} in network mk-test-preload-638389: {Iface:virbr1 ExpiryTime:2025-09-07 01:44:11 +0000 UTC Type:0 Mac:52:54:00:2c:df:3b Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-638389 Clientid:01:52:54:00:2c:df:3b}
	I0907 00:44:22.148985  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined IP address 192.168.39.172 and MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:22.149091  163749 main.go:141] libmachine: (test-preload-638389) DBG | Using SSH client type: external
	I0907 00:44:22.149130  163749 main.go:141] libmachine: (test-preload-638389) DBG | Using SSH private key: /home/jenkins/minikube-integration/21132-128697/.minikube/machines/test-preload-638389/id_rsa (-rw-------)
	I0907 00:44:22.149175  163749 main.go:141] libmachine: (test-preload-638389) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.172 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21132-128697/.minikube/machines/test-preload-638389/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0907 00:44:22.149196  163749 main.go:141] libmachine: (test-preload-638389) DBG | About to run SSH command:
	I0907 00:44:22.149222  163749 main.go:141] libmachine: (test-preload-638389) DBG | exit 0
	I0907 00:44:22.281288  163749 main.go:141] libmachine: (test-preload-638389) DBG | SSH cmd err, output: <nil>: 
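
WaitForSSH above shells out to /usr/bin/ssh with host-key checking disabled and runs `exit 0` until the guest's sshd answers. A hedged reconstruction of that readiness probe using the flags visible in the log (the host and key path are placeholders):

package main

import (
	"fmt"
	"os/exec"
)

// sshReady runs `exit 0` over ssh and reports whether the daemon answered.
func sshReady(host, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + host,
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run() == nil
}

func main() {
	// Placeholders; the log used 192.168.39.172 and the machine's id_rsa.
	fmt.Println(sshReady("192.168.39.172", "/path/to/id_rsa"))
}
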
	I0907 00:44:22.281678  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetConfigRaw
	I0907 00:44:22.282382  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetIP
	I0907 00:44:22.284967  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:22.285449  163749 main.go:141] libmachine: (test-preload-638389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:df:3b", ip: ""} in network mk-test-preload-638389: {Iface:virbr1 ExpiryTime:2025-09-07 01:44:11 +0000 UTC Type:0 Mac:52:54:00:2c:df:3b Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-638389 Clientid:01:52:54:00:2c:df:3b}
	I0907 00:44:22.285492  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined IP address 192.168.39.172 and MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:22.285703  163749 profile.go:143] Saving config to /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/test-preload-638389/config.json ...
	I0907 00:44:22.285944  163749 machine.go:93] provisionDockerMachine start ...
	I0907 00:44:22.285966  163749 main.go:141] libmachine: (test-preload-638389) Calling .DriverName
	I0907 00:44:22.286258  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHHostname
	I0907 00:44:22.288738  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:22.289104  163749 main.go:141] libmachine: (test-preload-638389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:df:3b", ip: ""} in network mk-test-preload-638389: {Iface:virbr1 ExpiryTime:2025-09-07 01:44:11 +0000 UTC Type:0 Mac:52:54:00:2c:df:3b Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-638389 Clientid:01:52:54:00:2c:df:3b}
	I0907 00:44:22.289127  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined IP address 192.168.39.172 and MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:22.289268  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHPort
	I0907 00:44:22.289435  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHKeyPath
	I0907 00:44:22.289550  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHKeyPath
	I0907 00:44:22.289710  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHUsername
	I0907 00:44:22.289851  163749 main.go:141] libmachine: Using SSH client type: native
	I0907 00:44:22.290143  163749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0907 00:44:22.290155  163749 main.go:141] libmachine: About to run SSH command:
	hostname
	I0907 00:44:22.410119  163749 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0907 00:44:22.410154  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetMachineName
	I0907 00:44:22.410459  163749 buildroot.go:166] provisioning hostname "test-preload-638389"
	I0907 00:44:22.410494  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetMachineName
	I0907 00:44:22.410740  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHHostname
	I0907 00:44:22.414421  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:22.414887  163749 main.go:141] libmachine: (test-preload-638389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:df:3b", ip: ""} in network mk-test-preload-638389: {Iface:virbr1 ExpiryTime:2025-09-07 01:44:11 +0000 UTC Type:0 Mac:52:54:00:2c:df:3b Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-638389 Clientid:01:52:54:00:2c:df:3b}
	I0907 00:44:22.414926  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined IP address 192.168.39.172 and MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:22.415164  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHPort
	I0907 00:44:22.415388  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHKeyPath
	I0907 00:44:22.415639  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHKeyPath
	I0907 00:44:22.415828  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHUsername
	I0907 00:44:22.415996  163749 main.go:141] libmachine: Using SSH client type: native
	I0907 00:44:22.416294  163749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0907 00:44:22.416314  163749 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-638389 && echo "test-preload-638389" | sudo tee /etc/hostname
	I0907 00:44:22.553274  163749 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-638389
	
	I0907 00:44:22.553317  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHHostname
	I0907 00:44:22.556306  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:22.556662  163749 main.go:141] libmachine: (test-preload-638389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:df:3b", ip: ""} in network mk-test-preload-638389: {Iface:virbr1 ExpiryTime:2025-09-07 01:44:11 +0000 UTC Type:0 Mac:52:54:00:2c:df:3b Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-638389 Clientid:01:52:54:00:2c:df:3b}
	I0907 00:44:22.556692  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined IP address 192.168.39.172 and MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:22.556909  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHPort
	I0907 00:44:22.557171  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHKeyPath
	I0907 00:44:22.557384  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHKeyPath
	I0907 00:44:22.557543  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHUsername
	I0907 00:44:22.557674  163749 main.go:141] libmachine: Using SSH client type: native
	I0907 00:44:22.557920  163749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0907 00:44:22.557943  163749 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-638389' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-638389/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-638389' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:44:22.686420  163749 main.go:141] libmachine: SSH cmd err, output: <nil>: 
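
The hostname script above is idempotent: it does nothing if /etc/hosts already maps the name, rewrites an existing 127.0.1.1 entry if present, and appends one otherwise. A small Go sketch of the same decision applied to the file contents rather than via sed (illustrative, not minikube's provisioner):

package main

import (
	"fmt"
	"strings"
)

// patchHosts returns hosts-file content guaranteed to map 127.0.1.1 to name.
func patchHosts(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		// Approximates the script's: grep -xq '.*\s<name>' /etc/hosts
		if strings.HasSuffix(l, " "+name) || strings.HasSuffix(l, "\t"+name) {
			return hosts // already mapped; nothing to do
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // rewrite the existing entry
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name // append a new entry
}

func main() {
	fmt.Println(patchHosts("127.0.0.1 localhost\n127.0.1.1 old-name", "test-preload-638389"))
}
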
	I0907 00:44:22.686456  163749 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21132-128697/.minikube CaCertPath:/home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21132-128697/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21132-128697/.minikube}
	I0907 00:44:22.686485  163749 buildroot.go:174] setting up certificates
	I0907 00:44:22.686499  163749 provision.go:84] configureAuth start
	I0907 00:44:22.686512  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetMachineName
	I0907 00:44:22.686870  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetIP
	I0907 00:44:22.690000  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:22.690401  163749 main.go:141] libmachine: (test-preload-638389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:df:3b", ip: ""} in network mk-test-preload-638389: {Iface:virbr1 ExpiryTime:2025-09-07 01:44:11 +0000 UTC Type:0 Mac:52:54:00:2c:df:3b Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-638389 Clientid:01:52:54:00:2c:df:3b}
	I0907 00:44:22.690439  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined IP address 192.168.39.172 and MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:22.690605  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHHostname
	I0907 00:44:22.693366  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:22.693706  163749 main.go:141] libmachine: (test-preload-638389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:df:3b", ip: ""} in network mk-test-preload-638389: {Iface:virbr1 ExpiryTime:2025-09-07 01:44:11 +0000 UTC Type:0 Mac:52:54:00:2c:df:3b Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-638389 Clientid:01:52:54:00:2c:df:3b}
	I0907 00:44:22.693739  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined IP address 192.168.39.172 and MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:22.693912  163749 provision.go:143] copyHostCerts
	I0907 00:44:22.693985  163749 exec_runner.go:144] found /home/jenkins/minikube-integration/21132-128697/.minikube/ca.pem, removing ...
	I0907 00:44:22.694009  163749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21132-128697/.minikube/ca.pem
	I0907 00:44:22.694085  163749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21132-128697/.minikube/ca.pem (1082 bytes)
	I0907 00:44:22.694201  163749 exec_runner.go:144] found /home/jenkins/minikube-integration/21132-128697/.minikube/cert.pem, removing ...
	I0907 00:44:22.694213  163749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21132-128697/.minikube/cert.pem
	I0907 00:44:22.694243  163749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21132-128697/.minikube/cert.pem (1123 bytes)
	I0907 00:44:22.694318  163749 exec_runner.go:144] found /home/jenkins/minikube-integration/21132-128697/.minikube/key.pem, removing ...
	I0907 00:44:22.694326  163749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21132-128697/.minikube/key.pem
	I0907 00:44:22.694360  163749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21132-128697/.minikube/key.pem (1679 bytes)
	I0907 00:44:22.694433  163749 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21132-128697/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca-key.pem org=jenkins.test-preload-638389 san=[127.0.0.1 192.168.39.172 localhost minikube test-preload-638389]
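
configureAuth above issues a server certificate whose SANs cover the loopback address, the VM's IP, and the machine's hostnames, valid for the profile's CertExpiration (26280h). A minimal crypto/x509 sketch of SAN-bearing certificate generation; it self-signs for brevity, whereas the log shows signing against minikube's CA, and the field values are taken from the log only as examples:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-638389"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the san=[...] list in the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.172")},
		DNSNames:    []string{"localhost", "minikube", "test-preload-638389"},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
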
	I0907 00:44:23.154946  163749 provision.go:177] copyRemoteCerts
	I0907 00:44:23.155021  163749 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:44:23.155052  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHHostname
	I0907 00:44:23.157881  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:23.158172  163749 main.go:141] libmachine: (test-preload-638389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:df:3b", ip: ""} in network mk-test-preload-638389: {Iface:virbr1 ExpiryTime:2025-09-07 01:44:11 +0000 UTC Type:0 Mac:52:54:00:2c:df:3b Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-638389 Clientid:01:52:54:00:2c:df:3b}
	I0907 00:44:23.158206  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined IP address 192.168.39.172 and MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:23.158344  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHPort
	I0907 00:44:23.158563  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHKeyPath
	I0907 00:44:23.158715  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHUsername
	I0907 00:44:23.158839  163749 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/test-preload-638389/id_rsa Username:docker}
	I0907 00:44:23.249503  163749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:44:23.281267  163749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0907 00:44:23.313952  163749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0907 00:44:23.347102  163749 provision.go:87] duration metric: took 660.585009ms to configureAuth
	I0907 00:44:23.347139  163749 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:44:23.347377  163749 config.go:182] Loaded profile config "test-preload-638389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0907 00:44:23.347473  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHHostname
	I0907 00:44:23.350523  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:23.350953  163749 main.go:141] libmachine: (test-preload-638389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:df:3b", ip: ""} in network mk-test-preload-638389: {Iface:virbr1 ExpiryTime:2025-09-07 01:44:11 +0000 UTC Type:0 Mac:52:54:00:2c:df:3b Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-638389 Clientid:01:52:54:00:2c:df:3b}
	I0907 00:44:23.350987  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined IP address 192.168.39.172 and MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:23.351194  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHPort
	I0907 00:44:23.351417  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHKeyPath
	I0907 00:44:23.351586  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHKeyPath
	I0907 00:44:23.351688  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHUsername
	I0907 00:44:23.351850  163749 main.go:141] libmachine: Using SSH client type: native
	I0907 00:44:23.352132  163749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0907 00:44:23.352157  163749 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:44:23.611615  163749 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:44:23.611646  163749 machine.go:96] duration metric: took 1.32568768s to provisionDockerMachine
	I0907 00:44:23.611660  163749 start.go:293] postStartSetup for "test-preload-638389" (driver="kvm2")
	I0907 00:44:23.611670  163749 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:44:23.611697  163749 main.go:141] libmachine: (test-preload-638389) Calling .DriverName
	I0907 00:44:23.612070  163749 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:44:23.612133  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHHostname
	I0907 00:44:23.614997  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:23.615588  163749 main.go:141] libmachine: (test-preload-638389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:df:3b", ip: ""} in network mk-test-preload-638389: {Iface:virbr1 ExpiryTime:2025-09-07 01:44:11 +0000 UTC Type:0 Mac:52:54:00:2c:df:3b Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-638389 Clientid:01:52:54:00:2c:df:3b}
	I0907 00:44:23.615612  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined IP address 192.168.39.172 and MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:23.615835  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHPort
	I0907 00:44:23.616061  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHKeyPath
	I0907 00:44:23.616207  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHUsername
	I0907 00:44:23.616345  163749 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/test-preload-638389/id_rsa Username:docker}
	I0907 00:44:23.707227  163749 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:44:23.713458  163749 info.go:137] Remote host: Buildroot 2025.02
	I0907 00:44:23.713487  163749 filesync.go:126] Scanning /home/jenkins/minikube-integration/21132-128697/.minikube/addons for local assets ...
	I0907 00:44:23.713572  163749 filesync.go:126] Scanning /home/jenkins/minikube-integration/21132-128697/.minikube/files for local assets ...
	I0907 00:44:23.713648  163749 filesync.go:149] local asset: /home/jenkins/minikube-integration/21132-128697/.minikube/files/etc/ssl/certs/1330252.pem -> 1330252.pem in /etc/ssl/certs
	I0907 00:44:23.713737  163749 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:44:23.726732  163749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/files/etc/ssl/certs/1330252.pem --> /etc/ssl/certs/1330252.pem (1708 bytes)
	I0907 00:44:23.759728  163749 start.go:296] duration metric: took 148.049215ms for postStartSetup
	I0907 00:44:23.759779  163749 fix.go:56] duration metric: took 23.768463137s for fixHost
	I0907 00:44:23.759807  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHHostname
	I0907 00:44:23.762706  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:23.763137  163749 main.go:141] libmachine: (test-preload-638389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:df:3b", ip: ""} in network mk-test-preload-638389: {Iface:virbr1 ExpiryTime:2025-09-07 01:44:11 +0000 UTC Type:0 Mac:52:54:00:2c:df:3b Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-638389 Clientid:01:52:54:00:2c:df:3b}
	I0907 00:44:23.763170  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined IP address 192.168.39.172 and MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:23.763331  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHPort
	I0907 00:44:23.763513  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHKeyPath
	I0907 00:44:23.763659  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHKeyPath
	I0907 00:44:23.763778  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHUsername
	I0907 00:44:23.763978  163749 main.go:141] libmachine: Using SSH client type: native
	I0907 00:44:23.764200  163749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0907 00:44:23.764210  163749 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0907 00:44:23.882693  163749 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757205863.856783703
	
	I0907 00:44:23.882721  163749 fix.go:216] guest clock: 1757205863.856783703
	I0907 00:44:23.882730  163749 fix.go:229] Guest: 2025-09-07 00:44:23.856783703 +0000 UTC Remote: 2025-09-07 00:44:23.75978409 +0000 UTC m=+27.382678940 (delta=96.999613ms)
	I0907 00:44:23.882783  163749 fix.go:200] guest clock delta is within tolerance: 96.999613ms
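
fix.go above reads `date +%s.%N` from the guest, converts it to a timestamp, and accepts the start because the guest/host delta (about 97ms here) is within tolerance. A sketch of the parse-and-compare step; treating one second as the tolerance is an assumption, not minikube's documented threshold:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestDelta parses `date +%s.%N` output and returns guest-minus-host skew.
// float64 loses nanosecond precision at this magnitude, which is fine for a check.
func guestDelta(out string, host time.Time) (time.Duration, error) {
	sec, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(sec*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	host := time.Unix(0, int64(1757205863.75978409*float64(time.Second)))
	d, _ := guestDelta("1757205863.856783703", host)
	fmt.Printf("delta=%v within tolerance: %v\n", d, d < time.Second && d > -time.Second)
}
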
	I0907 00:44:23.882791  163749 start.go:83] releasing machines lock for "test-preload-638389", held for 23.891485245s
	I0907 00:44:23.882814  163749 main.go:141] libmachine: (test-preload-638389) Calling .DriverName
	I0907 00:44:23.883115  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetIP
	I0907 00:44:23.886170  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:23.886556  163749 main.go:141] libmachine: (test-preload-638389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:df:3b", ip: ""} in network mk-test-preload-638389: {Iface:virbr1 ExpiryTime:2025-09-07 01:44:11 +0000 UTC Type:0 Mac:52:54:00:2c:df:3b Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-638389 Clientid:01:52:54:00:2c:df:3b}
	I0907 00:44:23.886582  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined IP address 192.168.39.172 and MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:23.886779  163749 main.go:141] libmachine: (test-preload-638389) Calling .DriverName
	I0907 00:44:23.887481  163749 main.go:141] libmachine: (test-preload-638389) Calling .DriverName
	I0907 00:44:23.887668  163749 main.go:141] libmachine: (test-preload-638389) Calling .DriverName
	I0907 00:44:23.887767  163749 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:44:23.887832  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHHostname
	I0907 00:44:23.887904  163749 ssh_runner.go:195] Run: cat /version.json
	I0907 00:44:23.887942  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHHostname
	I0907 00:44:23.890668  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:23.890979  163749 main.go:141] libmachine: (test-preload-638389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:df:3b", ip: ""} in network mk-test-preload-638389: {Iface:virbr1 ExpiryTime:2025-09-07 01:44:11 +0000 UTC Type:0 Mac:52:54:00:2c:df:3b Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-638389 Clientid:01:52:54:00:2c:df:3b}
	I0907 00:44:23.891016  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined IP address 192.168.39.172 and MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:23.891037  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:23.891186  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHPort
	I0907 00:44:23.891364  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHKeyPath
	I0907 00:44:23.891463  163749 main.go:141] libmachine: (test-preload-638389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:df:3b", ip: ""} in network mk-test-preload-638389: {Iface:virbr1 ExpiryTime:2025-09-07 01:44:11 +0000 UTC Type:0 Mac:52:54:00:2c:df:3b Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-638389 Clientid:01:52:54:00:2c:df:3b}
	I0907 00:44:23.891497  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined IP address 192.168.39.172 and MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:23.891534  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHUsername
	I0907 00:44:23.891672  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHPort
	I0907 00:44:23.891682  163749 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/test-preload-638389/id_rsa Username:docker}
	I0907 00:44:23.891835  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHKeyPath
	I0907 00:44:23.891980  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHUsername
	I0907 00:44:23.892136  163749 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/test-preload-638389/id_rsa Username:docker}
	I0907 00:44:24.000124  163749 ssh_runner.go:195] Run: systemctl --version
	I0907 00:44:24.006976  163749 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:44:24.154081  163749 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:44:24.161400  163749 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:44:24.161471  163749 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:44:24.184346  163749 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0907 00:44:24.184377  163749 start.go:495] detecting cgroup driver to use...
	I0907 00:44:24.184444  163749 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:44:24.205326  163749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:44:24.223294  163749 docker.go:218] disabling cri-docker service (if available) ...
	I0907 00:44:24.223355  163749 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:44:24.240572  163749 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:44:24.257617  163749 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:44:24.405853  163749 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:44:24.552047  163749 docker.go:234] disabling docker service ...
	I0907 00:44:24.552130  163749 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:44:24.569755  163749 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:44:24.586524  163749 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:44:24.808664  163749 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:44:24.955010  163749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:44:24.972767  163749 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:44:24.997231  163749 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0907 00:44:24.997307  163749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:44:25.010962  163749 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:44:25.011065  163749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:44:25.024998  163749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:44:25.039375  163749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:44:25.053084  163749 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:44:25.068071  163749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:44:25.081926  163749 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:44:25.105089  163749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
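The run of sed edits above pins pause_image and cgroup_manager in the CRI-O drop-in, then seeds default_sysctls with net.ipv4.ip_unprivileged_port_start=0. A pure-Go sketch of the same line-oriented substitution, assuming the file and keys shown in the log; setCrioOption is an illustrative name, not a minikube function:

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setCrioOption rewrites any existing `key = ...` line in a CRI-O drop-in,
    // a pure-Go analogue of the sed substitutions above.
    func setCrioOption(path, key, value string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile("(?m)^.*" + regexp.QuoteMeta(key) + " = .*$")
    	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
    	return os.WriteFile(path, out, 0o644)
    }

    // setCrioOption("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.10")
    // setCrioOption("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager", "cgroupfs")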
	I0907 00:44:25.119438  163749 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:44:25.130723  163749 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0907 00:44:25.130812  163749 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0907 00:44:25.152902  163749 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
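The failed sysctl probe above is expected: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, which is why the log immediately falls back to modprobe and then enables IPv4 forwarding. A sketch of that fallback, assuming root on the guest; ensureBridgeNetfilter is an invented name:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // ensureBridgeNetfilter stats the sysctl file, loads br_netfilter if it is
    // missing, then enables IPv4 forwarding (writing /proc/sys requires root).
    func ensureBridgeNetfilter() error {
    	const sysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
    	if _, err := os.Stat(sysctl); err != nil {
    		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
    		}
    	}
    	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
    }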
	I0907 00:44:25.166033  163749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:44:25.310834  163749 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:44:25.441964  163749 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:44:25.442060  163749 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:44:25.448420  163749 start.go:563] Will wait 60s for crictl version
	I0907 00:44:25.448492  163749 ssh_runner.go:195] Run: which crictl
	I0907 00:44:25.453718  163749 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:44:25.496869  163749 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0907 00:44:25.496964  163749 ssh_runner.go:195] Run: crio --version
	I0907 00:44:25.527460  163749 ssh_runner.go:195] Run: crio --version
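After `systemctl restart crio` the log waits up to 60s for the CRI socket to appear, then up to 60s more for crictl to report a version. A sketch of such a stat-based poll; waitForSocket is an illustrative name:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket stats path until it exists or the budget runs out, the same
    // shape as the 60s socket wait logged above.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    // waitForSocket("/var/run/crio/crio.sock", 60*time.Second)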
	I0907 00:44:25.559069  163749 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0907 00:44:25.560432  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetIP
	I0907 00:44:25.563207  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:25.563624  163749 main.go:141] libmachine: (test-preload-638389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:df:3b", ip: ""} in network mk-test-preload-638389: {Iface:virbr1 ExpiryTime:2025-09-07 01:44:11 +0000 UTC Type:0 Mac:52:54:00:2c:df:3b Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-638389 Clientid:01:52:54:00:2c:df:3b}
	I0907 00:44:25.563672  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined IP address 192.168.39.172 and MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:25.563979  163749 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0907 00:44:25.568643  163749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
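The bash one-liner above drops any stale host.minikube.internal entry from /etc/hosts and appends a fresh mapping. An equivalent sketch in Go, assuming root on the guest; pinHost is an invented name:

    package main

    import (
    	"os"
    	"strings"
    )

    // pinHost keeps every /etc/hosts line except old mappings for name, then
    // appends "ip<TAB>name" — the same grep -v + echo dance as the bash above.
    func pinHost(hostsPath, ip, name string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    // pinHost("/etc/hosts", "192.168.39.1", "host.minikube.internal")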
	I0907 00:44:25.584265  163749 kubeadm.go:875] updating cluster {Name:test-preload-638389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-638389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0907 00:44:25.584405  163749 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0907 00:44:25.584468  163749 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:44:25.626295  163749 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0907 00:44:25.626384  163749 ssh_runner.go:195] Run: which lz4
	I0907 00:44:25.631081  163749 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0907 00:44:25.636428  163749 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0907 00:44:25.636464  163749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0907 00:44:27.274607  163749 crio.go:462] duration metric: took 1.643566457s to copy over tarball
	I0907 00:44:27.274689  163749 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0907 00:44:29.068258  163749 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.793533957s)
	I0907 00:44:29.068292  163749 crio.go:469] duration metric: took 1.793650972s to extract the tarball
	I0907 00:44:29.068304  163749 ssh_runner.go:146] rm: /preloaded.tar.lz4
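The preload path above is: stat the target, scp the ~398 MB tarball, untar it into /var with lz4, then delete it, with a duration metric around each step. A sketch of the extract step, assuming tar and lz4 exist on the guest as this log shows; extractPreload is an illustrative name:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // extractPreload mirrors the tar invocation above, timing it the way the
    // duration-metric lines in the log do.
    func extractPreload(tarball string) error {
    	start := time.Now()
    	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
    	}
    	fmt.Printf("took %s to extract the tarball\n", time.Since(start))
    	return nil
    }

    // extractPreload("/preloaded.tar.lz4")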
	I0907 00:44:29.109640  163749 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:44:29.152939  163749 crio.go:514] all images are preloaded for cri-o runtime.
	I0907 00:44:29.152965  163749 cache_images.go:85] Images are preloaded, skipping loading
	I0907 00:44:29.152974  163749 kubeadm.go:926] updating node { 192.168.39.172 8443 v1.32.0 crio true true} ...
	I0907 00:44:29.153086  163749 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-638389 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-638389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0907 00:44:29.153148  163749 ssh_runner.go:195] Run: crio config
	I0907 00:44:29.202186  163749 cni.go:84] Creating CNI manager for ""
	I0907 00:44:29.202214  163749 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:44:29.202229  163749 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0907 00:44:29.202257  163749 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.172 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-638389 NodeName:test-preload-638389 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:44:29.202421  163749 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-638389"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.172"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.172"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:44:29.202522  163749 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0907 00:44:29.214976  163749 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:44:29.215046  163749 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:44:29.227319  163749 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0907 00:44:29.250372  163749 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:44:29.273169  163749 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I0907 00:44:29.296193  163749 ssh_runner.go:195] Run: grep 192.168.39.172	control-plane.minikube.internal$ /etc/hosts
	I0907 00:44:29.300945  163749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0907 00:44:29.316958  163749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:44:29.462440  163749 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0907 00:44:29.484219  163749 certs.go:68] Setting up /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/test-preload-638389 for IP: 192.168.39.172
	I0907 00:44:29.484252  163749 certs.go:194] generating shared ca certs ...
	I0907 00:44:29.484277  163749 certs.go:226] acquiring lock for ca certs: {Name:mk640ab940eb4d822d1f15a5cd2b466b6472cad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:44:29.484459  163749 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21132-128697/.minikube/ca.key
	I0907 00:44:29.484517  163749 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21132-128697/.minikube/proxy-client-ca.key
	I0907 00:44:29.484532  163749 certs.go:256] generating profile certs ...
	I0907 00:44:29.484630  163749 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/test-preload-638389/client.key
	I0907 00:44:29.484718  163749 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/test-preload-638389/apiserver.key.da87bbc1
	I0907 00:44:29.484791  163749 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/test-preload-638389/proxy-client.key
	I0907 00:44:29.484926  163749 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/133025.pem (1338 bytes)
	W0907 00:44:29.484967  163749 certs.go:480] ignoring /home/jenkins/minikube-integration/21132-128697/.minikube/certs/133025_empty.pem, impossibly tiny 0 bytes
	I0907 00:44:29.484979  163749 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:44:29.485012  163749 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:44:29.485047  163749 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:44:29.485097  163749 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/key.pem (1679 bytes)
	I0907 00:44:29.485156  163749 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/files/etc/ssl/certs/1330252.pem (1708 bytes)
	I0907 00:44:29.485775  163749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:44:29.528522  163749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:44:29.562878  163749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:44:29.595091  163749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:44:29.627325  163749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/test-preload-638389/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0907 00:44:29.661319  163749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/test-preload-638389/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0907 00:44:29.695389  163749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/test-preload-638389/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:44:29.727506  163749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/test-preload-638389/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0907 00:44:29.758367  163749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/certs/133025.pem --> /usr/share/ca-certificates/133025.pem (1338 bytes)
	I0907 00:44:29.788707  163749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/files/etc/ssl/certs/1330252.pem --> /usr/share/ca-certificates/1330252.pem (1708 bytes)
	I0907 00:44:29.820230  163749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:44:29.853735  163749 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:44:29.876658  163749 ssh_runner.go:195] Run: openssl version
	I0907 00:44:29.883628  163749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:44:29.898283  163749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:44:29.904078  163749 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:47 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:44:29.904158  163749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:44:29.912411  163749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:44:29.927187  163749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/133025.pem && ln -fs /usr/share/ca-certificates/133025.pem /etc/ssl/certs/133025.pem"
	I0907 00:44:29.942412  163749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133025.pem
	I0907 00:44:29.948391  163749 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:55 /usr/share/ca-certificates/133025.pem
	I0907 00:44:29.948470  163749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133025.pem
	I0907 00:44:29.956735  163749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/133025.pem /etc/ssl/certs/51391683.0"
	I0907 00:44:29.971737  163749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1330252.pem && ln -fs /usr/share/ca-certificates/1330252.pem /etc/ssl/certs/1330252.pem"
	I0907 00:44:29.986652  163749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1330252.pem
	I0907 00:44:29.992592  163749 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:55 /usr/share/ca-certificates/1330252.pem
	I0907 00:44:29.992676  163749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1330252.pem
	I0907 00:44:30.000442  163749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1330252.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:44:30.014407  163749 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0907 00:44:30.020535  163749 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:44:30.029195  163749 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:44:30.037569  163749 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:44:30.046210  163749 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:44:30.054751  163749 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:44:30.063169  163749 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
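Each `openssl x509 -checkend 86400` run above asks whether a certificate expires within the next 24 hours. The same check in pure Go, under the assumption that each file holds a single PEM certificate; expiresWithin is an invented helper:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires inside d,
    // the pure-Go equivalent of `openssl x509 -noout -in path -checkend <secs>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    // expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)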
	I0907 00:44:30.071323  163749 kubeadm.go:392] StartCluster: {Name:test-preload-638389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-638389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0907 00:44:30.071447  163749 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:44:30.071508  163749 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:44:30.114180  163749 cri.go:89] found id: ""
	I0907 00:44:30.114288  163749 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0907 00:44:30.126931  163749 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0907 00:44:30.126954  163749 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0907 00:44:30.127004  163749 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0907 00:44:30.139791  163749 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:44:30.140221  163749 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-638389" does not appear in /home/jenkins/minikube-integration/21132-128697/kubeconfig
	I0907 00:44:30.140321  163749 kubeconfig.go:62] /home/jenkins/minikube-integration/21132-128697/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-638389" cluster setting kubeconfig missing "test-preload-638389" context setting]
	I0907 00:44:30.140667  163749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-128697/kubeconfig: {Name:mk63d1fc2221fbf03163b06fbb544f3ee799299f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:44:30.141260  163749 kapi.go:59] client config for test-preload-638389: &rest.Config{Host:"https://192.168.39.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21132-128697/.minikube/profiles/test-preload-638389/client.crt", KeyFile:"/home/jenkins/minikube-integration/21132-128697/.minikube/profiles/test-preload-638389/client.key", CAFile:"/home/jenkins/minikube-integration/21132-128697/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25a3920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0907 00:44:30.141723  163749 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0907 00:44:30.141739  163749 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0907 00:44:30.141746  163749 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0907 00:44:30.141754  163749 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0907 00:44:30.141761  163749 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0907 00:44:30.142101  163749 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0907 00:44:30.153959  163749 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.39.172
	I0907 00:44:30.154001  163749 kubeadm.go:1152] stopping kube-system containers ...
	I0907 00:44:30.154015  163749 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0907 00:44:30.154072  163749 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:44:30.208452  163749 cri.go:89] found id: ""
	I0907 00:44:30.208538  163749 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0907 00:44:30.237028  163749 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0907 00:44:30.254643  163749 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0907 00:44:30.254672  163749 kubeadm.go:157] found existing configuration files:
	
	I0907 00:44:30.254738  163749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0907 00:44:30.266686  163749 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0907 00:44:30.266762  163749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0907 00:44:30.279516  163749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0907 00:44:30.291272  163749 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0907 00:44:30.291343  163749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0907 00:44:30.303763  163749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0907 00:44:30.315663  163749 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0907 00:44:30.315741  163749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0907 00:44:30.328294  163749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0907 00:44:30.340043  163749 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0907 00:44:30.340117  163749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0907 00:44:30.352717  163749 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
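The grep/rm loop above keeps only kubeconfig files that already point at https://control-plane.minikube.internal:8443; since none of the four files exist yet, each grep exits 2 and the rm -f is a no-op. A sketch of that pruning logic; pruneStaleKubeconfigs is an illustrative name:

    package main

    import (
    	"os"
    	"strings"
    )

    // pruneStaleKubeconfigs removes any listed file that does not reference the
    // expected endpoint, matching the grep-then-rm loop above.
    func pruneStaleKubeconfigs(endpoint string, paths []string) {
    	for _, p := range paths {
    		data, err := os.ReadFile(p)
    		if err != nil {
    			continue // file absent, as in this log: nothing to remove
    		}
    		if !strings.Contains(string(data), endpoint) {
    			os.Remove(p) // rm -f semantics: ignore the error
    		}
    	}
    }

    // pruneStaleKubeconfigs("https://control-plane.minikube.internal:8443",
    //     []string{"/etc/kubernetes/admin.conf", "/etc/kubernetes/kubelet.conf",
    //         "/etc/kubernetes/controller-manager.conf", "/etc/kubernetes/scheduler.conf"})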
	I0907 00:44:30.365513  163749 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:44:30.426206  163749 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:44:31.675616  163749 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.249365316s)
	I0907 00:44:31.675657  163749 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:44:31.937401  163749 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:44:32.002617  163749 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:44:32.087690  163749 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:44:32.087791  163749 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:44:32.588074  163749 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:44:33.087904  163749 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:44:33.588651  163749 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:44:33.618317  163749 api_server.go:72] duration metric: took 1.530627474s to wait for apiserver process to appear ...
	I0907 00:44:33.618354  163749 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:44:33.618376  163749 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I0907 00:44:36.405244  163749 api_server.go:279] https://192.168.39.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:44:36.405309  163749 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:44:36.405344  163749 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I0907 00:44:36.487819  163749 api_server.go:279] https://192.168.39.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0907 00:44:36.487858  163749 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0907 00:44:36.619199  163749 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I0907 00:44:36.625799  163749 api_server.go:279] https://192.168.39.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0907 00:44:36.625848  163749 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0907 00:44:37.118479  163749 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I0907 00:44:37.129024  163749 api_server.go:279] https://192.168.39.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0907 00:44:37.129066  163749 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0907 00:44:37.618787  163749 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I0907 00:44:37.628114  163749 api_server.go:279] https://192.168.39.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0907 00:44:37.628144  163749 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0907 00:44:38.118812  163749 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I0907 00:44:38.123462  163749 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
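The healthz probes above trace the apiserver coming up: 403 while anonymous access to /healthz is still forbidden, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are pending, then 200 "ok". A sketch of such a polling loop; a real client would present the cluster CA and client certs from the kapi config above rather than skipping verification:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitHealthz polls the endpoint until it answers 200 "ok", tolerating the
    // 403 and 500 phases seen above.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if resp, err := client.Get(url); err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy at %s after %s", url, timeout)
    }

    // waitHealthz("https://192.168.39.172:8443/healthz", 60*time.Second)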
	I0907 00:44:38.136607  163749 api_server.go:141] control plane version: v1.32.0
	I0907 00:44:38.136639  163749 api_server.go:131] duration metric: took 4.518278045s to wait for apiserver health ...
	I0907 00:44:38.136658  163749 cni.go:84] Creating CNI manager for ""
	I0907 00:44:38.136668  163749 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:44:38.138483  163749 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0907 00:44:38.139726  163749 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0907 00:44:38.158454  163749 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0907 00:44:38.190394  163749 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:44:38.196528  163749 system_pods.go:59] 7 kube-system pods found
	I0907 00:44:38.196577  163749 system_pods.go:61] "coredns-668d6bf9bc-7bjbj" [41f73fc0-0d27-41e8-9863-3d273e29460c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0907 00:44:38.196586  163749 system_pods.go:61] "etcd-test-preload-638389" [f99cb7b5-8318-4582-96ee-97a81c586ffa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0907 00:44:38.196595  163749 system_pods.go:61] "kube-apiserver-test-preload-638389" [59435bc0-aabb-4578-813b-7fd69da4f46c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0907 00:44:38.196601  163749 system_pods.go:61] "kube-controller-manager-test-preload-638389" [a0794272-b480-4104-aada-fa481a6ccf91] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0907 00:44:38.196607  163749 system_pods.go:61] "kube-proxy-6ptrt" [dda967b3-0529-4d89-97eb-a04429d40c18] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0907 00:44:38.196612  163749 system_pods.go:61] "kube-scheduler-test-preload-638389" [2d1a9bf4-9936-4f99-8651-c5d815381e5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0907 00:44:38.196626  163749 system_pods.go:61] "storage-provisioner" [6f95c75e-7db0-442e-8b85-7f89ab18b7e6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0907 00:44:38.196641  163749 system_pods.go:74] duration metric: took 6.222737ms to wait for pod list to return data ...
	I0907 00:44:38.196650  163749 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:44:38.201583  163749 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0907 00:44:38.201618  163749 node_conditions.go:123] node cpu capacity is 2
	I0907 00:44:38.201632  163749 node_conditions.go:105] duration metric: took 4.977942ms to run NodePressure ...
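The NodePressure check above reads each node's capacity (17734596Ki of ephemeral storage and 2 CPUs here). A sketch of fetching the same fields with k8s.io/client-go, assuming a clientset already built from the kubeconfig written earlier; nodeCapacity is an invented helper:

    package main

    import (
    	"context"
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // nodeCapacity lists every node and prints the two capacity fields the
    // NodePressure verification above inspects.
    func nodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		eph := n.Status.Capacity[v1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[v1.ResourceCPU]
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
    	}
    	return nil
    }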
	I0907 00:44:38.201662  163749 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0907 00:44:38.486468  163749 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0907 00:44:38.492087  163749 kubeadm.go:735] kubelet initialised
	I0907 00:44:38.492125  163749 kubeadm.go:736] duration metric: took 5.618954ms waiting for restarted kubelet to initialise ...
	I0907 00:44:38.492146  163749 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0907 00:44:38.515905  163749 ops.go:34] apiserver oom_adj: -16
	I0907 00:44:38.515938  163749 kubeadm.go:593] duration metric: took 8.388977847s to restartPrimaryControlPlane
	I0907 00:44:38.515954  163749 kubeadm.go:394] duration metric: took 8.444663343s to StartCluster
	I0907 00:44:38.515983  163749 settings.go:142] acquiring lock: {Name:mkd1edfb540d79a9fb2ef8a25e6ffcf2ec0c7ac0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:44:38.516094  163749 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21132-128697/kubeconfig
	I0907 00:44:38.516997  163749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-128697/kubeconfig: {Name:mk63d1fc2221fbf03163b06fbb544f3ee799299f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:44:38.517402  163749 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:44:38.517498  163749 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0907 00:44:38.517600  163749 addons.go:69] Setting storage-provisioner=true in profile "test-preload-638389"
	I0907 00:44:38.517610  163749 config.go:182] Loaded profile config "test-preload-638389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0907 00:44:38.517649  163749 addons.go:238] Setting addon storage-provisioner=true in "test-preload-638389"
	W0907 00:44:38.517663  163749 addons.go:247] addon storage-provisioner should already be in state true
	I0907 00:44:38.517655  163749 addons.go:69] Setting default-storageclass=true in profile "test-preload-638389"
	I0907 00:44:38.517691  163749 host.go:66] Checking if "test-preload-638389" exists ...
	I0907 00:44:38.517699  163749 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-638389"
	I0907 00:44:38.518035  163749 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:44:38.518088  163749 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:44:38.518140  163749 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:44:38.518191  163749 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:44:38.519542  163749 out.go:179] * Verifying Kubernetes components...
	I0907 00:44:38.520931  163749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:44:38.536121  163749 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34327
	I0907 00:44:38.536130  163749 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39485
	I0907 00:44:38.536664  163749 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:44:38.536774  163749 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:44:38.537311  163749 main.go:141] libmachine: Using API Version  1
	I0907 00:44:38.537340  163749 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:44:38.537439  163749 main.go:141] libmachine: Using API Version  1
	I0907 00:44:38.537460  163749 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:44:38.537770  163749 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:44:38.537846  163749 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:44:38.538045  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetState
	I0907 00:44:38.538373  163749 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:44:38.538427  163749 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:44:38.540711  163749 kapi.go:59] client config for test-preload-638389: &rest.Config{Host:"https://192.168.39.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21132-128697/.minikube/profiles/test-preload-638389/client.crt", KeyFile:"/home/jenkins/minikube-integration/21132-128697/.minikube/profiles/test-preload-638389/client.key", CAFile:"/home/jenkins/minikube-integration/21132-128697/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25a3920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0907 00:44:38.541122  163749 addons.go:238] Setting addon default-storageclass=true in "test-preload-638389"
	W0907 00:44:38.541145  163749 addons.go:247] addon default-storageclass should already be in state true
	I0907 00:44:38.541180  163749 host.go:66] Checking if "test-preload-638389" exists ...
	I0907 00:44:38.541581  163749 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:44:38.541640  163749 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:44:38.555864  163749 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40507
	I0907 00:44:38.556401  163749 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:44:38.556926  163749 main.go:141] libmachine: Using API Version  1
	I0907 00:44:38.556957  163749 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:44:38.557449  163749 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:44:38.557670  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetState
	I0907 00:44:38.559079  163749 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39277
	I0907 00:44:38.559579  163749 main.go:141] libmachine: (test-preload-638389) Calling .DriverName
	I0907 00:44:38.559642  163749 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:44:38.560209  163749 main.go:141] libmachine: Using API Version  1
	I0907 00:44:38.560230  163749 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:44:38.560601  163749 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:44:38.561242  163749 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:44:38.561300  163749 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:44:38.561771  163749 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0907 00:44:38.563082  163749 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:44:38.563113  163749 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0907 00:44:38.563137  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHHostname
	I0907 00:44:38.566632  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:38.567135  163749 main.go:141] libmachine: (test-preload-638389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:df:3b", ip: ""} in network mk-test-preload-638389: {Iface:virbr1 ExpiryTime:2025-09-07 01:44:11 +0000 UTC Type:0 Mac:52:54:00:2c:df:3b Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-638389 Clientid:01:52:54:00:2c:df:3b}
	I0907 00:44:38.567160  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined IP address 192.168.39.172 and MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:38.567435  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHPort
	I0907 00:44:38.567625  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHKeyPath
	I0907 00:44:38.567749  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHUsername
	I0907 00:44:38.567983  163749 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/test-preload-638389/id_rsa Username:docker}
	I0907 00:44:38.579134  163749 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38569
	I0907 00:44:38.579615  163749 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:44:38.580200  163749 main.go:141] libmachine: Using API Version  1
	I0907 00:44:38.580245  163749 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:44:38.580725  163749 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:44:38.581042  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetState
	I0907 00:44:38.582961  163749 main.go:141] libmachine: (test-preload-638389) Calling .DriverName
	I0907 00:44:38.583259  163749 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0907 00:44:38.583280  163749 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0907 00:44:38.583307  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHHostname
	I0907 00:44:38.586927  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:38.587459  163749 main.go:141] libmachine: (test-preload-638389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:df:3b", ip: ""} in network mk-test-preload-638389: {Iface:virbr1 ExpiryTime:2025-09-07 01:44:11 +0000 UTC Type:0 Mac:52:54:00:2c:df:3b Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-638389 Clientid:01:52:54:00:2c:df:3b}
	I0907 00:44:38.587493  163749 main.go:141] libmachine: (test-preload-638389) DBG | domain test-preload-638389 has defined IP address 192.168.39.172 and MAC address 52:54:00:2c:df:3b in network mk-test-preload-638389
	I0907 00:44:38.587618  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHPort
	I0907 00:44:38.587858  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHKeyPath
	I0907 00:44:38.588045  163749 main.go:141] libmachine: (test-preload-638389) Calling .GetSSHUsername
	I0907 00:44:38.588203  163749 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/test-preload-638389/id_rsa Username:docker}
	I0907 00:44:38.765708  163749 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0907 00:44:38.791741  163749 node_ready.go:35] waiting up to 6m0s for node "test-preload-638389" to be "Ready" ...
	I0907 00:44:38.795627  163749 node_ready.go:49] node "test-preload-638389" is "Ready"
	I0907 00:44:38.795659  163749 node_ready.go:38] duration metric: took 3.857468ms for node "test-preload-638389" to be "Ready" ...
	I0907 00:44:38.795689  163749 api_server.go:52] waiting for apiserver process to appear ...
	I0907 00:44:38.795755  163749 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:44:38.821050  163749 api_server.go:72] duration metric: took 303.596255ms to wait for apiserver process to appear ...
	I0907 00:44:38.821079  163749 api_server.go:88] waiting for apiserver healthz status ...
	I0907 00:44:38.821098  163749 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I0907 00:44:38.826519  163749 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I0907 00:44:38.827504  163749 api_server.go:141] control plane version: v1.32.0
	I0907 00:44:38.827526  163749 api_server.go:131] duration metric: took 6.440364ms to wait for apiserver health ...
	I0907 00:44:38.827535  163749 system_pods.go:43] waiting for kube-system pods to appear ...
	I0907 00:44:38.830282  163749 system_pods.go:59] 7 kube-system pods found
	I0907 00:44:38.830322  163749 system_pods.go:61] "coredns-668d6bf9bc-7bjbj" [41f73fc0-0d27-41e8-9863-3d273e29460c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0907 00:44:38.830329  163749 system_pods.go:61] "etcd-test-preload-638389" [f99cb7b5-8318-4582-96ee-97a81c586ffa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0907 00:44:38.830344  163749 system_pods.go:61] "kube-apiserver-test-preload-638389" [59435bc0-aabb-4578-813b-7fd69da4f46c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0907 00:44:38.830354  163749 system_pods.go:61] "kube-controller-manager-test-preload-638389" [a0794272-b480-4104-aada-fa481a6ccf91] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0907 00:44:38.830361  163749 system_pods.go:61] "kube-proxy-6ptrt" [dda967b3-0529-4d89-97eb-a04429d40c18] Running
	I0907 00:44:38.830367  163749 system_pods.go:61] "kube-scheduler-test-preload-638389" [2d1a9bf4-9936-4f99-8651-c5d815381e5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0907 00:44:38.830372  163749 system_pods.go:61] "storage-provisioner" [6f95c75e-7db0-442e-8b85-7f89ab18b7e6] Running
	I0907 00:44:38.830379  163749 system_pods.go:74] duration metric: took 2.838602ms to wait for pod list to return data ...
	I0907 00:44:38.830389  163749 default_sa.go:34] waiting for default service account to be created ...
	I0907 00:44:38.833984  163749 default_sa.go:45] found service account: "default"
	I0907 00:44:38.834025  163749 default_sa.go:55] duration metric: took 3.626684ms for default service account to be created ...
	I0907 00:44:38.834036  163749 system_pods.go:116] waiting for k8s-apps to be running ...
	I0907 00:44:38.836775  163749 system_pods.go:86] 7 kube-system pods found
	I0907 00:44:38.836819  163749 system_pods.go:89] "coredns-668d6bf9bc-7bjbj" [41f73fc0-0d27-41e8-9863-3d273e29460c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0907 00:44:38.836830  163749 system_pods.go:89] "etcd-test-preload-638389" [f99cb7b5-8318-4582-96ee-97a81c586ffa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0907 00:44:38.836860  163749 system_pods.go:89] "kube-apiserver-test-preload-638389" [59435bc0-aabb-4578-813b-7fd69da4f46c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0907 00:44:38.836872  163749 system_pods.go:89] "kube-controller-manager-test-preload-638389" [a0794272-b480-4104-aada-fa481a6ccf91] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0907 00:44:38.836877  163749 system_pods.go:89] "kube-proxy-6ptrt" [dda967b3-0529-4d89-97eb-a04429d40c18] Running
	I0907 00:44:38.836882  163749 system_pods.go:89] "kube-scheduler-test-preload-638389" [2d1a9bf4-9936-4f99-8651-c5d815381e5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0907 00:44:38.836888  163749 system_pods.go:89] "storage-provisioner" [6f95c75e-7db0-442e-8b85-7f89ab18b7e6] Running
	I0907 00:44:38.836896  163749 system_pods.go:126] duration metric: took 2.848978ms to wait for k8s-apps to be running ...
	I0907 00:44:38.836906  163749 system_svc.go:44] waiting for kubelet service to be running ....
	I0907 00:44:38.836963  163749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:44:38.854858  163749 system_svc.go:56] duration metric: took 17.937341ms WaitForService to wait for kubelet
	I0907 00:44:38.854895  163749 kubeadm.go:578] duration metric: took 337.448672ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0907 00:44:38.854916  163749 node_conditions.go:102] verifying NodePressure condition ...
	I0907 00:44:38.859050  163749 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0907 00:44:38.859078  163749 node_conditions.go:123] node cpu capacity is 2
	I0907 00:44:38.859089  163749 node_conditions.go:105] duration metric: took 4.167858ms to run NodePressure ...
	I0907 00:44:38.859104  163749 start.go:241] waiting for startup goroutines ...
	I0907 00:44:38.983512  163749 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0907 00:44:38.989627  163749 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0907 00:44:39.682957  163749 main.go:141] libmachine: Making call to close driver server
	I0907 00:44:39.682990  163749 main.go:141] libmachine: (test-preload-638389) Calling .Close
	I0907 00:44:39.683005  163749 main.go:141] libmachine: Making call to close driver server
	I0907 00:44:39.683017  163749 main.go:141] libmachine: (test-preload-638389) Calling .Close
	I0907 00:44:39.683312  163749 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:44:39.683334  163749 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:44:39.683344  163749 main.go:141] libmachine: Making call to close driver server
	I0907 00:44:39.683358  163749 main.go:141] libmachine: (test-preload-638389) Calling .Close
	I0907 00:44:39.683421  163749 main.go:141] libmachine: (test-preload-638389) DBG | Closing plugin on server side
	I0907 00:44:39.683425  163749 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:44:39.683457  163749 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:44:39.683467  163749 main.go:141] libmachine: Making call to close driver server
	I0907 00:44:39.683475  163749 main.go:141] libmachine: (test-preload-638389) Calling .Close
	I0907 00:44:39.683592  163749 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:44:39.683648  163749 main.go:141] libmachine: (test-preload-638389) DBG | Closing plugin on server side
	I0907 00:44:39.683678  163749 main.go:141] libmachine: (test-preload-638389) DBG | Closing plugin on server side
	I0907 00:44:39.683746  163749 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:44:39.683792  163749 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:44:39.683845  163749 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:44:39.692171  163749 main.go:141] libmachine: Making call to close driver server
	I0907 00:44:39.692197  163749 main.go:141] libmachine: (test-preload-638389) Calling .Close
	I0907 00:44:39.692588  163749 main.go:141] libmachine: Successfully made call to close driver server
	I0907 00:44:39.692611  163749 main.go:141] libmachine: Making call to close connection to plugin binary
	I0907 00:44:39.695365  163749 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0907 00:44:39.696524  163749 addons.go:514] duration metric: took 1.179029214s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0907 00:44:39.696582  163749 start.go:246] waiting for cluster config update ...
	I0907 00:44:39.696600  163749 start.go:255] writing updated cluster config ...
	I0907 00:44:39.696969  163749 ssh_runner.go:195] Run: rm -f paused
	I0907 00:44:39.704401  163749 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0907 00:44:39.705071  163749 kapi.go:59] client config for test-preload-638389: &rest.Config{Host:"https://192.168.39.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21132-128697/.minikube/profiles/test-preload-638389/client.crt", KeyFile:"/home/jenkins/minikube-integration/21132-128697/.minikube/profiles/test-preload-638389/client.key", CAFile:"/home/jenkins/minikube-integration/21132-128697/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25a3920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0907 00:44:39.708907  163749 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-7bjbj" in "kube-system" namespace to be "Ready" or be gone ...
	W0907 00:44:41.716452  163749 pod_ready.go:104] pod "coredns-668d6bf9bc-7bjbj" is not "Ready", error: <nil>
	W0907 00:44:44.214977  163749 pod_ready.go:104] pod "coredns-668d6bf9bc-7bjbj" is not "Ready", error: <nil>
	W0907 00:44:46.715164  163749 pod_ready.go:104] pod "coredns-668d6bf9bc-7bjbj" is not "Ready", error: <nil>
	I0907 00:44:47.717563  163749 pod_ready.go:94] pod "coredns-668d6bf9bc-7bjbj" is "Ready"
	I0907 00:44:47.717603  163749 pod_ready.go:86] duration metric: took 8.008661046s for pod "coredns-668d6bf9bc-7bjbj" in "kube-system" namespace to be "Ready" or be gone ...
	I0907 00:44:47.722020  163749 pod_ready.go:83] waiting for pod "etcd-test-preload-638389" in "kube-system" namespace to be "Ready" or be gone ...
	W0907 00:44:49.727158  163749 pod_ready.go:104] pod "etcd-test-preload-638389" is not "Ready", error: <nil>
	I0907 00:44:51.227460  163749 pod_ready.go:94] pod "etcd-test-preload-638389" is "Ready"
	I0907 00:44:51.227492  163749 pod_ready.go:86] duration metric: took 3.505446261s for pod "etcd-test-preload-638389" in "kube-system" namespace to be "Ready" or be gone ...
	I0907 00:44:51.229878  163749 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-638389" in "kube-system" namespace to be "Ready" or be gone ...
	I0907 00:44:51.234144  163749 pod_ready.go:94] pod "kube-apiserver-test-preload-638389" is "Ready"
	I0907 00:44:51.234168  163749 pod_ready.go:86] duration metric: took 4.26327ms for pod "kube-apiserver-test-preload-638389" in "kube-system" namespace to be "Ready" or be gone ...
	I0907 00:44:51.235972  163749 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-638389" in "kube-system" namespace to be "Ready" or be gone ...
	I0907 00:44:51.240612  163749 pod_ready.go:94] pod "kube-controller-manager-test-preload-638389" is "Ready"
	I0907 00:44:51.240637  163749 pod_ready.go:86] duration metric: took 4.646857ms for pod "kube-controller-manager-test-preload-638389" in "kube-system" namespace to be "Ready" or be gone ...
	I0907 00:44:51.242705  163749 pod_ready.go:83] waiting for pod "kube-proxy-6ptrt" in "kube-system" namespace to be "Ready" or be gone ...
	I0907 00:44:51.426890  163749 pod_ready.go:94] pod "kube-proxy-6ptrt" is "Ready"
	I0907 00:44:51.426929  163749 pod_ready.go:86] duration metric: took 184.190078ms for pod "kube-proxy-6ptrt" in "kube-system" namespace to be "Ready" or be gone ...
	I0907 00:44:51.626256  163749 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-638389" in "kube-system" namespace to be "Ready" or be gone ...
	I0907 00:44:52.024833  163749 pod_ready.go:94] pod "kube-scheduler-test-preload-638389" is "Ready"
	I0907 00:44:52.024868  163749 pod_ready.go:86] duration metric: took 398.58075ms for pod "kube-scheduler-test-preload-638389" in "kube-system" namespace to be "Ready" or be gone ...
	I0907 00:44:52.024885  163749 pod_ready.go:40] duration metric: took 12.320442617s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0907 00:44:52.068810  163749 start.go:617] kubectl: 1.33.2, cluster: 1.32.0 (minor skew: 1)
	I0907 00:44:52.071048  163749 out.go:179] * Done! kubectl is now configured to use "test-preload-638389" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 07 00:44:53 test-preload-638389 crio[841]: time="2025-09-07 00:44:53.050584315Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757205893050557648,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=10d02f23-5e50-4c21-9278-50c7f54381c9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 07 00:44:53 test-preload-638389 crio[841]: time="2025-09-07 00:44:53.051297554Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9869672e-e808-45f7-9f0e-ebdb39f47232 name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 00:44:53 test-preload-638389 crio[841]: time="2025-09-07 00:44:53.051376851Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9869672e-e808-45f7-9f0e-ebdb39f47232 name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 00:44:53 test-preload-638389 crio[841]: time="2025-09-07 00:44:53.051544176Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc3664bc505ae7a085e72d7423934c38139e19a6171e652459db7e023263dc2c,PodSandboxId:8c07f1eee08d551de3a97aadbbf87a737ebd981b099e82eda3be9ba6cda079c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1757205881147938253,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-7bjbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41f73fc0-0d27-41e8-9863-3d273e29460c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307c910fa4d221e31b3df2c2a16f8c8c0c3bc09ca4fcc368fc00c4aeaea80ed3,PodSandboxId:d3bfeb67061a542d95a318e8afac141a39cd43ae8f35706416452a27eb66a987,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1757205877645911045,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6ptrt,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: dda967b3-0529-4d89-97eb-a04429d40c18,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78d89b132a10b3e8addcadfeee94527f0dd28e86ef5be498a17daaf735cf23a,PodSandboxId:533496c920c72acd92e695b27e8205700be23537df0ac84d9cbd6da905ac6620,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757205877556513436,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f
95c75e-7db0-442e-8b85-7f89ab18b7e6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24892b21dbff65cfd56c16948bb10c0a373f3f0276a0e2d73d9682f49125880f,PodSandboxId:eff1f57801cd0ac8bf447d87a7cfefb22ec4cce126fbd61f5e383bb41608628d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1757205873207064150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-638389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb33af55b
2e152d2f1da0d1a096591ba,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae885c909b5ec6b15020a6d35cae9c0cf9055200b65ad113c385c4643e8c7909,PodSandboxId:5b33f4d3d1ce2eb4063380faac15192f577cf35bc80ad0963bfb2aa798c5c350,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1757205873178109124,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-638389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca7d36b087ff9cf83e6ca98c704e5247,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1baa6ff3968573a97bf4114f5b3efc6c3406cebb515bd6a59bbd7d8847ec7255,PodSandboxId:a80347ea4f310822cb1a557a596a821eb7c7e8738c791c659e44e7aa7231c0fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1757205873144988942,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-638389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f9c5edadea8f30a6f2969527b8e7b7b,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ac2209bc9d4a1f0c9dc95bbf1f7c2fb5e48a137fc537e705f10e02c0f15000,PodSandboxId:27f1f7ed4db777ec8897c720c120f41600a6ab72e0dfa203c94975106f8657f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1757205873112139860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-638389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b50201082074d861608a7ccecd9cb7a,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9869672e-e808-45f7-9f0e-ebdb39f47232 name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 00:44:53 test-preload-638389 crio[841]: time="2025-09-07 00:44:53.101950312Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=45ad71fd-8049-4801-8dfd-a78c8a5aa6f0 name=/runtime.v1.RuntimeService/Version
	Sep 07 00:44:53 test-preload-638389 crio[841]: time="2025-09-07 00:44:53.102219221Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=45ad71fd-8049-4801-8dfd-a78c8a5aa6f0 name=/runtime.v1.RuntimeService/Version
	Sep 07 00:44:53 test-preload-638389 crio[841]: time="2025-09-07 00:44:53.103890636Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9fdb023d-6678-463d-a2dc-db523638e611 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 07 00:44:53 test-preload-638389 crio[841]: time="2025-09-07 00:44:53.104464872Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757205893104440041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9fdb023d-6678-463d-a2dc-db523638e611 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 07 00:44:53 test-preload-638389 crio[841]: time="2025-09-07 00:44:53.105326175Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a2b6a9f-a7d4-4c86-92aa-ee085410f9f6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 00:44:53 test-preload-638389 crio[841]: time="2025-09-07 00:44:53.105460637Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a2b6a9f-a7d4-4c86-92aa-ee085410f9f6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 00:44:53 test-preload-638389 crio[841]: time="2025-09-07 00:44:53.105944131Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc3664bc505ae7a085e72d7423934c38139e19a6171e652459db7e023263dc2c,PodSandboxId:8c07f1eee08d551de3a97aadbbf87a737ebd981b099e82eda3be9ba6cda079c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1757205881147938253,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-7bjbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41f73fc0-0d27-41e8-9863-3d273e29460c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307c910fa4d221e31b3df2c2a16f8c8c0c3bc09ca4fcc368fc00c4aeaea80ed3,PodSandboxId:d3bfeb67061a542d95a318e8afac141a39cd43ae8f35706416452a27eb66a987,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1757205877645911045,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6ptrt,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: dda967b3-0529-4d89-97eb-a04429d40c18,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78d89b132a10b3e8addcadfeee94527f0dd28e86ef5be498a17daaf735cf23a,PodSandboxId:533496c920c72acd92e695b27e8205700be23537df0ac84d9cbd6da905ac6620,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757205877556513436,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f
95c75e-7db0-442e-8b85-7f89ab18b7e6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24892b21dbff65cfd56c16948bb10c0a373f3f0276a0e2d73d9682f49125880f,PodSandboxId:eff1f57801cd0ac8bf447d87a7cfefb22ec4cce126fbd61f5e383bb41608628d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1757205873207064150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-638389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb33af55b
2e152d2f1da0d1a096591ba,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae885c909b5ec6b15020a6d35cae9c0cf9055200b65ad113c385c4643e8c7909,PodSandboxId:5b33f4d3d1ce2eb4063380faac15192f577cf35bc80ad0963bfb2aa798c5c350,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1757205873178109124,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-638389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca7d36b087ff9cf83e6ca98c704e5247,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1baa6ff3968573a97bf4114f5b3efc6c3406cebb515bd6a59bbd7d8847ec7255,PodSandboxId:a80347ea4f310822cb1a557a596a821eb7c7e8738c791c659e44e7aa7231c0fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1757205873144988942,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-638389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f9c5edadea8f30a6f2969527b8e7b7b,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ac2209bc9d4a1f0c9dc95bbf1f7c2fb5e48a137fc537e705f10e02c0f15000,PodSandboxId:27f1f7ed4db777ec8897c720c120f41600a6ab72e0dfa203c94975106f8657f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1757205873112139860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-638389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b50201082074d861608a7ccecd9cb7a,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2a2b6a9f-a7d4-4c86-92aa-ee085410f9f6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 00:44:53 test-preload-638389 crio[841]: time="2025-09-07 00:44:53.148990882Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5c14c970-e9c5-4397-bba3-d7db4f014f6d name=/runtime.v1.RuntimeService/Version
	Sep 07 00:44:53 test-preload-638389 crio[841]: time="2025-09-07 00:44:53.149090839Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5c14c970-e9c5-4397-bba3-d7db4f014f6d name=/runtime.v1.RuntimeService/Version
	Sep 07 00:44:53 test-preload-638389 crio[841]: time="2025-09-07 00:44:53.150219501Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df9afae6-7975-4742-8aed-f1590b16ba49 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 07 00:44:53 test-preload-638389 crio[841]: time="2025-09-07 00:44:53.150701188Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757205893150678845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df9afae6-7975-4742-8aed-f1590b16ba49 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 07 00:44:53 test-preload-638389 crio[841]: time="2025-09-07 00:44:53.151382733Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=448803b2-4e21-4d45-be8a-4308bbd32778 name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 00:44:53 test-preload-638389 crio[841]: time="2025-09-07 00:44:53.151452455Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=448803b2-4e21-4d45-be8a-4308bbd32778 name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 00:44:53 test-preload-638389 crio[841]: time="2025-09-07 00:44:53.151759367Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc3664bc505ae7a085e72d7423934c38139e19a6171e652459db7e023263dc2c,PodSandboxId:8c07f1eee08d551de3a97aadbbf87a737ebd981b099e82eda3be9ba6cda079c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1757205881147938253,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-7bjbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41f73fc0-0d27-41e8-9863-3d273e29460c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307c910fa4d221e31b3df2c2a16f8c8c0c3bc09ca4fcc368fc00c4aeaea80ed3,PodSandboxId:d3bfeb67061a542d95a318e8afac141a39cd43ae8f35706416452a27eb66a987,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1757205877645911045,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6ptrt,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: dda967b3-0529-4d89-97eb-a04429d40c18,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78d89b132a10b3e8addcadfeee94527f0dd28e86ef5be498a17daaf735cf23a,PodSandboxId:533496c920c72acd92e695b27e8205700be23537df0ac84d9cbd6da905ac6620,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757205877556513436,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f
95c75e-7db0-442e-8b85-7f89ab18b7e6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24892b21dbff65cfd56c16948bb10c0a373f3f0276a0e2d73d9682f49125880f,PodSandboxId:eff1f57801cd0ac8bf447d87a7cfefb22ec4cce126fbd61f5e383bb41608628d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1757205873207064150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-638389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb33af55b
2e152d2f1da0d1a096591ba,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae885c909b5ec6b15020a6d35cae9c0cf9055200b65ad113c385c4643e8c7909,PodSandboxId:5b33f4d3d1ce2eb4063380faac15192f577cf35bc80ad0963bfb2aa798c5c350,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1757205873178109124,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-638389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca7d36b087ff9cf83e6ca98c704e5247,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1baa6ff3968573a97bf4114f5b3efc6c3406cebb515bd6a59bbd7d8847ec7255,PodSandboxId:a80347ea4f310822cb1a557a596a821eb7c7e8738c791c659e44e7aa7231c0fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1757205873144988942,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-638389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f9c5edadea8f30a6f2969527b8e7b7b,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ac2209bc9d4a1f0c9dc95bbf1f7c2fb5e48a137fc537e705f10e02c0f15000,PodSandboxId:27f1f7ed4db777ec8897c720c120f41600a6ab72e0dfa203c94975106f8657f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1757205873112139860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-638389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b50201082074d861608a7ccecd9cb7a,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=448803b2-4e21-4d45-be8a-4308bbd32778 name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 00:44:53 test-preload-638389 crio[841]: time="2025-09-07 00:44:53.189866189Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2eb5f24c-e64e-4cd6-b227-9f5290aeee19 name=/runtime.v1.RuntimeService/Version
	Sep 07 00:44:53 test-preload-638389 crio[841]: time="2025-09-07 00:44:53.189975713Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2eb5f24c-e64e-4cd6-b227-9f5290aeee19 name=/runtime.v1.RuntimeService/Version
	Sep 07 00:44:53 test-preload-638389 crio[841]: time="2025-09-07 00:44:53.191491078Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=36486cc0-077d-4761-896d-79974e24764a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 07 00:44:53 test-preload-638389 crio[841]: time="2025-09-07 00:44:53.191992184Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757205893191966740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=36486cc0-077d-4761-896d-79974e24764a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 07 00:44:53 test-preload-638389 crio[841]: time="2025-09-07 00:44:53.192518747Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29b68801-65c2-4ed5-9e07-a88b13d5ef84 name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 00:44:53 test-preload-638389 crio[841]: time="2025-09-07 00:44:53.192573393Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29b68801-65c2-4ed5-9e07-a88b13d5ef84 name=/runtime.v1.RuntimeService/ListContainers
	Sep 07 00:44:53 test-preload-638389 crio[841]: time="2025-09-07 00:44:53.192800264Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc3664bc505ae7a085e72d7423934c38139e19a6171e652459db7e023263dc2c,PodSandboxId:8c07f1eee08d551de3a97aadbbf87a737ebd981b099e82eda3be9ba6cda079c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1757205881147938253,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-7bjbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41f73fc0-0d27-41e8-9863-3d273e29460c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307c910fa4d221e31b3df2c2a16f8c8c0c3bc09ca4fcc368fc00c4aeaea80ed3,PodSandboxId:d3bfeb67061a542d95a318e8afac141a39cd43ae8f35706416452a27eb66a987,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1757205877645911045,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6ptrt,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: dda967b3-0529-4d89-97eb-a04429d40c18,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c78d89b132a10b3e8addcadfeee94527f0dd28e86ef5be498a17daaf735cf23a,PodSandboxId:533496c920c72acd92e695b27e8205700be23537df0ac84d9cbd6da905ac6620,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1757205877556513436,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f
95c75e-7db0-442e-8b85-7f89ab18b7e6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24892b21dbff65cfd56c16948bb10c0a373f3f0276a0e2d73d9682f49125880f,PodSandboxId:eff1f57801cd0ac8bf447d87a7cfefb22ec4cce126fbd61f5e383bb41608628d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1757205873207064150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-638389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb33af55b
2e152d2f1da0d1a096591ba,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae885c909b5ec6b15020a6d35cae9c0cf9055200b65ad113c385c4643e8c7909,PodSandboxId:5b33f4d3d1ce2eb4063380faac15192f577cf35bc80ad0963bfb2aa798c5c350,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1757205873178109124,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-638389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca7d36b087ff9cf83e6ca98c704e5247,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1baa6ff3968573a97bf4114f5b3efc6c3406cebb515bd6a59bbd7d8847ec7255,PodSandboxId:a80347ea4f310822cb1a557a596a821eb7c7e8738c791c659e44e7aa7231c0fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1757205873144988942,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-638389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f9c5edadea8f30a6f2969527b8e7b7b,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ac2209bc9d4a1f0c9dc95bbf1f7c2fb5e48a137fc537e705f10e02c0f15000,PodSandboxId:27f1f7ed4db777ec8897c720c120f41600a6ab72e0dfa203c94975106f8657f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1757205873112139860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-638389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b50201082074d861608a7ccecd9cb7a,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=29b68801-65c2-4ed5-9e07-a88b13d5ef84 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dc3664bc505ae       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   12 seconds ago      Running             coredns                   1                   8c07f1eee08d5       coredns-668d6bf9bc-7bjbj
	307c910fa4d22       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   15 seconds ago      Running             kube-proxy                1                   d3bfeb67061a5       kube-proxy-6ptrt
	c78d89b132a10       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       2                   533496c920c72       storage-provisioner
	24892b21dbff6       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   20 seconds ago      Running             kube-scheduler            1                   eff1f57801cd0       kube-scheduler-test-preload-638389
	ae885c909b5ec       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   20 seconds ago      Running             etcd                      1                   5b33f4d3d1ce2       etcd-test-preload-638389
	1baa6ff396857       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   20 seconds ago      Running             kube-controller-manager   1                   a80347ea4f310       kube-controller-manager-test-preload-638389
	d7ac2209bc9d4       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   20 seconds ago      Running             kube-apiserver            1                   27f1f7ed4db77       kube-apiserver-test-preload-638389
	
	
	==> coredns [dc3664bc505ae7a085e72d7423934c38139e19a6171e652459db7e023263dc2c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59867 - 22280 "HINFO IN 7405626767093979157.677925092989310024. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.032147418s
	
	
	==> describe nodes <==
	Name:               test-preload-638389
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-638389
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=196d69ba373adb3ed4fbcc87dc5d81b7f1adbb1d
	                    minikube.k8s.io/name=test-preload-638389
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_07T00_42_58_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Sep 2025 00:42:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-638389
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Sep 2025 00:44:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Sep 2025 00:44:38 +0000   Sun, 07 Sep 2025 00:42:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Sep 2025 00:44:38 +0000   Sun, 07 Sep 2025 00:42:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Sep 2025 00:44:38 +0000   Sun, 07 Sep 2025 00:42:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Sep 2025 00:44:38 +0000   Sun, 07 Sep 2025 00:44:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.172
	  Hostname:    test-preload-638389
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 9b69eb8ec41349e1af1a13acd454bef4
	  System UUID:                9b69eb8e-c413-49e1-af1a-13acd454bef4
	  Boot ID:                    b96de317-d94c-436c-ba38-76901434557c
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-7bjbj                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     111s
	  kube-system                 etcd-test-preload-638389                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         117s
	  kube-system                 kube-apiserver-test-preload-638389             250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-test-preload-638389    200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-6ptrt                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-test-preload-638389             100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 109s               kube-proxy       
	  Normal   Starting                 15s                kube-proxy       
	  Normal   NodeHasSufficientMemory  116s               kubelet          Node test-preload-638389 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  116s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    116s               kubelet          Node test-preload-638389 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     116s               kubelet          Node test-preload-638389 status is now: NodeHasSufficientPID
	  Normal   Starting                 116s               kubelet          Starting kubelet.
	  Normal   NodeReady                115s               kubelet          Node test-preload-638389 status is now: NodeReady
	  Normal   RegisteredNode           112s               node-controller  Node test-preload-638389 event: Registered Node test-preload-638389 in Controller
	  Normal   Starting                 21s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-638389 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-638389 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-638389 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 17s                kubelet          Node test-preload-638389 has been rebooted, boot id: b96de317-d94c-436c-ba38-76901434557c
	  Normal   RegisteredNode           14s                node-controller  Node test-preload-638389 event: Registered Node test-preload-638389 in Controller
	
	
	==> dmesg <==
	[Sep 7 00:44] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000053] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002845] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.038483] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.084956] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.099881] kauditd_printk_skb: 74 callbacks suppressed
	[  +5.541679] kauditd_printk_skb: 177 callbacks suppressed
	[  +0.023674] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [ae885c909b5ec6b15020a6d35cae9c0cf9055200b65ad113c385c4643e8c7909] <==
	{"level":"info","ts":"2025-09-07T00:44:33.796466Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-09-07T00:44:33.796573Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-07T00:44:33.801760Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-07T00:44:33.801789Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-07T00:44:33.809512Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-07T00:44:33.812979Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"bbf1bb039b0d3451","initial-advertise-peer-urls":["https://192.168.39.172:2380"],"listen-peer-urls":["https://192.168.39.172:2380"],"advertise-client-urls":["https://192.168.39.172:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.172:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-07T00:44:33.814668Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-07T00:44:33.811073Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.172:2380"}
	{"level":"info","ts":"2025-09-07T00:44:33.817675Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.172:2380"}
	{"level":"info","ts":"2025-09-07T00:44:34.860726Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-07T00:44:34.860831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-07T00:44:34.860862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 received MsgPreVoteResp from bbf1bb039b0d3451 at term 2"}
	{"level":"info","ts":"2025-09-07T00:44:34.860884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 became candidate at term 3"}
	{"level":"info","ts":"2025-09-07T00:44:34.860913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 received MsgVoteResp from bbf1bb039b0d3451 at term 3"}
	{"level":"info","ts":"2025-09-07T00:44:34.860933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 became leader at term 3"}
	{"level":"info","ts":"2025-09-07T00:44:34.860952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bbf1bb039b0d3451 elected leader bbf1bb039b0d3451 at term 3"}
	{"level":"info","ts":"2025-09-07T00:44:34.864292Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"bbf1bb039b0d3451","local-member-attributes":"{Name:test-preload-638389 ClientURLs:[https://192.168.39.172:2379]}","request-path":"/0/members/bbf1bb039b0d3451/attributes","cluster-id":"a5f5c7bb54d744d4","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-07T00:44:34.864455Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-07T00:44:34.865320Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-07T00:44:34.865953Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-07T00:44:34.866698Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-07T00:44:34.868687Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-07T00:44:34.868768Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-07T00:44:34.869051Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-07T00:44:34.869576Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.172:2379"}
	
	
	==> kernel <==
	 00:44:53 up 0 min,  0 users,  load average: 0.86, 0.27, 0.09
	Linux test-preload-638389 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep  4 13:14:36 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [d7ac2209bc9d4a1f0c9dc95bbf1f7c2fb5e48a137fc537e705f10e02c0f15000] <==
	I0907 00:44:36.499890       1 autoregister_controller.go:144] Starting autoregister controller
	I0907 00:44:36.499897       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0907 00:44:36.499903       1 cache.go:39] Caches are synced for autoregister controller
	I0907 00:44:36.514949       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0907 00:44:36.537350       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0907 00:44:36.537409       1 policy_source.go:240] refreshing policies
	I0907 00:44:36.549423       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0907 00:44:36.550511       1 shared_informer.go:320] Caches are synced for configmaps
	I0907 00:44:36.550522       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0907 00:44:36.552684       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0907 00:44:36.552900       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0907 00:44:36.552928       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0907 00:44:36.553012       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0907 00:44:36.559251       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0907 00:44:36.562688       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0907 00:44:36.596691       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0907 00:44:37.094203       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0907 00:44:37.359134       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0907 00:44:38.328326       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0907 00:44:38.365240       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0907 00:44:38.401494       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0907 00:44:38.416688       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0907 00:44:39.762779       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0907 00:44:40.028547       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0907 00:44:40.067911       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [1baa6ff3968573a97bf4114f5b3efc6c3406cebb515bd6a59bbd7d8847ec7255] <==
	I0907 00:44:39.691225       1 shared_informer.go:320] Caches are synced for deployment
	I0907 00:44:39.691314       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0907 00:44:39.692092       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0907 00:44:39.693478       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0907 00:44:39.696456       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0907 00:44:39.708481       1 shared_informer.go:320] Caches are synced for taint
	I0907 00:44:39.708680       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0907 00:44:39.708760       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-638389"
	I0907 00:44:39.708818       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0907 00:44:39.708856       1 shared_informer.go:320] Caches are synced for endpoint
	I0907 00:44:39.708888       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0907 00:44:39.709077       1 shared_informer.go:320] Caches are synced for ephemeral
	I0907 00:44:39.709214       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0907 00:44:39.719686       1 shared_informer.go:320] Caches are synced for persistent volume
	I0907 00:44:39.721941       1 shared_informer.go:320] Caches are synced for disruption
	I0907 00:44:39.723147       1 shared_informer.go:320] Caches are synced for attach detach
	I0907 00:44:39.727763       1 shared_informer.go:320] Caches are synced for HPA
	I0907 00:44:39.729234       1 shared_informer.go:320] Caches are synced for garbage collector
	I0907 00:44:39.733562       1 shared_informer.go:320] Caches are synced for stateful set
	I0907 00:44:39.742963       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0907 00:44:40.042642       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="333.086825ms"
	I0907 00:44:40.042766       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="50.634µs"
	I0907 00:44:41.267079       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="50.108µs"
	I0907 00:44:47.434060       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="13.991513ms"
	I0907 00:44:47.434539       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="204.206µs"
	
	
	==> kube-proxy [307c910fa4d221e31b3df2c2a16f8c8c0c3bc09ca4fcc368fc00c4aeaea80ed3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0907 00:44:37.889465       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0907 00:44:37.899275       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.172"]
	E0907 00:44:37.899427       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0907 00:44:37.937388       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0907 00:44:37.937423       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0907 00:44:37.937443       1 server_linux.go:170] "Using iptables Proxier"
	I0907 00:44:37.940291       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0907 00:44:37.940730       1 server.go:497] "Version info" version="v1.32.0"
	I0907 00:44:37.940891       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:44:37.942648       1 config.go:199] "Starting service config controller"
	I0907 00:44:37.942728       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0907 00:44:37.942764       1 config.go:105] "Starting endpoint slice config controller"
	I0907 00:44:37.942780       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0907 00:44:37.945255       1 config.go:329] "Starting node config controller"
	I0907 00:44:37.945342       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0907 00:44:38.043262       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0907 00:44:38.043325       1 shared_informer.go:320] Caches are synced for service config
	I0907 00:44:38.045552       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [24892b21dbff65cfd56c16948bb10c0a373f3f0276a0e2d73d9682f49125880f] <==
	I0907 00:44:34.146835       1 serving.go:386] Generated self-signed cert in-memory
	W0907 00:44:36.401951       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0907 00:44:36.401991       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0907 00:44:36.402001       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0907 00:44:36.402012       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0907 00:44:36.505134       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I0907 00:44:36.505184       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0907 00:44:36.511019       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0907 00:44:36.511174       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0907 00:44:36.511208       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0907 00:44:36.511234       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0907 00:44:36.611425       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 07 00:44:36 test-preload-638389 kubelet[1166]: I0907 00:44:36.596814    1166 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-638389"
	Sep 07 00:44:36 test-preload-638389 kubelet[1166]: E0907 00:44:36.611041    1166 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-638389\" already exists" pod="kube-system/kube-apiserver-test-preload-638389"
	Sep 07 00:44:36 test-preload-638389 kubelet[1166]: I0907 00:44:36.625426    1166 kubelet_node_status.go:125] "Node was previously registered" node="test-preload-638389"
	Sep 07 00:44:36 test-preload-638389 kubelet[1166]: I0907 00:44:36.625504    1166 kubelet_node_status.go:79] "Successfully registered node" node="test-preload-638389"
	Sep 07 00:44:36 test-preload-638389 kubelet[1166]: I0907 00:44:36.625525    1166 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 07 00:44:36 test-preload-638389 kubelet[1166]: I0907 00:44:36.626773    1166 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 07 00:44:36 test-preload-638389 kubelet[1166]: I0907 00:44:36.628356    1166 setters.go:602] "Node became not ready" node="test-preload-638389" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-07T00:44:36Z","lastTransitionTime":"2025-09-07T00:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Sep 07 00:44:37 test-preload-638389 kubelet[1166]: I0907 00:44:37.025675    1166 apiserver.go:52] "Watching apiserver"
	Sep 07 00:44:37 test-preload-638389 kubelet[1166]: E0907 00:44:37.030325    1166 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-7bjbj" podUID="41f73fc0-0d27-41e8-9863-3d273e29460c"
	Sep 07 00:44:37 test-preload-638389 kubelet[1166]: I0907 00:44:37.049161    1166 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Sep 07 00:44:37 test-preload-638389 kubelet[1166]: I0907 00:44:37.086893    1166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6f95c75e-7db0-442e-8b85-7f89ab18b7e6-tmp\") pod \"storage-provisioner\" (UID: \"6f95c75e-7db0-442e-8b85-7f89ab18b7e6\") " pod="kube-system/storage-provisioner"
	Sep 07 00:44:37 test-preload-638389 kubelet[1166]: I0907 00:44:37.088051    1166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dda967b3-0529-4d89-97eb-a04429d40c18-xtables-lock\") pod \"kube-proxy-6ptrt\" (UID: \"dda967b3-0529-4d89-97eb-a04429d40c18\") " pod="kube-system/kube-proxy-6ptrt"
	Sep 07 00:44:37 test-preload-638389 kubelet[1166]: I0907 00:44:37.088237    1166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dda967b3-0529-4d89-97eb-a04429d40c18-lib-modules\") pod \"kube-proxy-6ptrt\" (UID: \"dda967b3-0529-4d89-97eb-a04429d40c18\") " pod="kube-system/kube-proxy-6ptrt"
	Sep 07 00:44:37 test-preload-638389 kubelet[1166]: E0907 00:44:37.088701    1166 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 07 00:44:37 test-preload-638389 kubelet[1166]: E0907 00:44:37.088776    1166 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41f73fc0-0d27-41e8-9863-3d273e29460c-config-volume podName:41f73fc0-0d27-41e8-9863-3d273e29460c nodeName:}" failed. No retries permitted until 2025-09-07 00:44:37.588752239 +0000 UTC m=+5.672215467 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/41f73fc0-0d27-41e8-9863-3d273e29460c-config-volume") pod "coredns-668d6bf9bc-7bjbj" (UID: "41f73fc0-0d27-41e8-9863-3d273e29460c") : object "kube-system"/"coredns" not registered
	Sep 07 00:44:37 test-preload-638389 kubelet[1166]: E0907 00:44:37.591885    1166 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 07 00:44:37 test-preload-638389 kubelet[1166]: E0907 00:44:37.591970    1166 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41f73fc0-0d27-41e8-9863-3d273e29460c-config-volume podName:41f73fc0-0d27-41e8-9863-3d273e29460c nodeName:}" failed. No retries permitted until 2025-09-07 00:44:38.59195589 +0000 UTC m=+6.675419120 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/41f73fc0-0d27-41e8-9863-3d273e29460c-config-volume") pod "coredns-668d6bf9bc-7bjbj" (UID: "41f73fc0-0d27-41e8-9863-3d273e29460c") : object "kube-system"/"coredns" not registered
	Sep 07 00:44:38 test-preload-638389 kubelet[1166]: I0907 00:44:38.438715    1166 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Sep 07 00:44:38 test-preload-638389 kubelet[1166]: E0907 00:44:38.599551    1166 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 07 00:44:38 test-preload-638389 kubelet[1166]: E0907 00:44:38.599677    1166 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41f73fc0-0d27-41e8-9863-3d273e29460c-config-volume podName:41f73fc0-0d27-41e8-9863-3d273e29460c nodeName:}" failed. No retries permitted until 2025-09-07 00:44:40.599663326 +0000 UTC m=+8.683126556 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/41f73fc0-0d27-41e8-9863-3d273e29460c-config-volume") pod "coredns-668d6bf9bc-7bjbj" (UID: "41f73fc0-0d27-41e8-9863-3d273e29460c") : object "kube-system"/"coredns" not registered
	Sep 07 00:44:42 test-preload-638389 kubelet[1166]: E0907 00:44:42.116890    1166 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757205882116232711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 07 00:44:42 test-preload-638389 kubelet[1166]: E0907 00:44:42.116937    1166 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757205882116232711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 07 00:44:47 test-preload-638389 kubelet[1166]: I0907 00:44:47.403161    1166 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 07 00:44:52 test-preload-638389 kubelet[1166]: E0907 00:44:52.118980    1166 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757205892118503532,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 07 00:44:52 test-preload-638389 kubelet[1166]: E0907 00:44:52.119006    1166 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1757205892118503532,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [c78d89b132a10b3e8addcadfeee94527f0dd28e86ef5be498a17daaf735cf23a] <==
	I0907 00:44:37.797539       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-638389 -n test-preload-638389
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-638389 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-638389" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-638389
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-638389: (1.087464764s)
--- FAIL: TestPreload (175.01s)

TestPause/serial/SecondStartNoReconfiguration (66.12s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-257218 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-257218 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m0.707951437s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-257218] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21132
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21132-128697/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-128697/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-257218" primary control-plane node in "pause-257218" cluster
	* Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-257218" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I0907 00:53:46.089195  172686 out.go:360] Setting OutFile to fd 1 ...
	I0907 00:53:46.089549  172686 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:53:46.089565  172686 out.go:374] Setting ErrFile to fd 2...
	I0907 00:53:46.089573  172686 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:53:46.089982  172686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-128697/.minikube/bin
	I0907 00:53:46.090761  172686 out.go:368] Setting JSON to false
	I0907 00:53:46.092086  172686 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5769,"bootTime":1757200657,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0907 00:53:46.092175  172686 start.go:140] virtualization: kvm guest
	I0907 00:53:46.094166  172686 out.go:179] * [pause-257218] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0907 00:53:46.095644  172686 out.go:179]   - MINIKUBE_LOCATION=21132
	I0907 00:53:46.095684  172686 notify.go:220] Checking for updates...
	I0907 00:53:46.098235  172686 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:53:46.099662  172686 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21132-128697/kubeconfig
	I0907 00:53:46.100874  172686 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-128697/.minikube
	I0907 00:53:46.102004  172686 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0907 00:53:46.103139  172686 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 00:53:46.104982  172686 config.go:182] Loaded profile config "pause-257218": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 00:53:46.105606  172686 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:53:46.105690  172686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:53:46.127158  172686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45337
	I0907 00:53:46.127729  172686 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:53:46.128378  172686 main.go:141] libmachine: Using API Version  1
	I0907 00:53:46.128406  172686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:53:46.129027  172686 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:53:46.129251  172686 main.go:141] libmachine: (pause-257218) Calling .DriverName
	I0907 00:53:46.129573  172686 driver.go:421] Setting default libvirt URI to qemu:///system
	I0907 00:53:46.130075  172686 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:53:46.130127  172686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:53:46.151608  172686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44069
	I0907 00:53:46.152224  172686 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:53:46.152846  172686 main.go:141] libmachine: Using API Version  1
	I0907 00:53:46.152866  172686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:53:46.153295  172686 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:53:46.153482  172686 main.go:141] libmachine: (pause-257218) Calling .DriverName
	I0907 00:53:46.207083  172686 out.go:179] * Using the kvm2 driver based on existing profile
	I0907 00:53:46.208343  172686 start.go:304] selected driver: kvm2
	I0907 00:53:46.208367  172686 start.go:918] validating driver "kvm2" against &{Name:pause-257218 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-257218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.18 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0907 00:53:46.208571  172686 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 00:53:46.209080  172686 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:53:46.209174  172686 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21132-128697/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0907 00:53:46.231607  172686 install.go:137] /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0907 00:53:46.232851  172686 cni.go:84] Creating CNI manager for ""
	I0907 00:53:46.232927  172686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:53:46.233021  172686 start.go:348] cluster config:
	{Name:pause-257218 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-257218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.18 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0907 00:53:46.233244  172686 iso.go:125] acquiring lock: {Name:mk3bd5f7fbe7836651644a94b41f2b6111c9b69d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:53:46.235122  172686 out.go:179] * Starting "pause-257218" primary control-plane node in "pause-257218" cluster
	I0907 00:53:46.236345  172686 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0907 00:53:46.236408  172686 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21132-128697/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0907 00:53:46.236423  172686 cache.go:58] Caching tarball of preloaded images
	I0907 00:53:46.236544  172686 preload.go:172] Found /home/jenkins/minikube-integration/21132-128697/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0907 00:53:46.236555  172686 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0907 00:53:46.236730  172686 profile.go:143] Saving config to /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/pause-257218/config.json ...
	I0907 00:53:46.237030  172686 start.go:360] acquireMachinesLock for pause-257218: {Name:mk3b58ef42f26d446b63d531f457f6ac8953e3f5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 00:54:06.523407  172686 start.go:364] duration metric: took 20.286346801s to acquireMachinesLock for "pause-257218"
	I0907 00:54:06.523459  172686 start.go:96] Skipping create...Using existing machine configuration
	I0907 00:54:06.523466  172686 fix.go:54] fixHost starting: 
	I0907 00:54:06.523909  172686 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:54:06.523970  172686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:54:06.546004  172686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40011
	I0907 00:54:06.546699  172686 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:54:06.547296  172686 main.go:141] libmachine: Using API Version  1
	I0907 00:54:06.547324  172686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:54:06.547636  172686 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:54:06.547799  172686 main.go:141] libmachine: (pause-257218) Calling .DriverName
	I0907 00:54:06.547902  172686 main.go:141] libmachine: (pause-257218) Calling .GetState
	I0907 00:54:06.549795  172686 fix.go:112] recreateIfNeeded on pause-257218: state=Running err=<nil>
	W0907 00:54:06.549828  172686 fix.go:138] unexpected machine state, will restart: <nil>
	I0907 00:54:06.551870  172686 out.go:252] * Updating the running kvm2 "pause-257218" VM ...
	I0907 00:54:06.551911  172686 machine.go:93] provisionDockerMachine start ...
	I0907 00:54:06.551934  172686 main.go:141] libmachine: (pause-257218) Calling .DriverName
	I0907 00:54:06.552272  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHHostname
	I0907 00:54:06.555503  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:06.555967  172686 main.go:141] libmachine: (pause-257218) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:82:07", ip: ""} in network mk-pause-257218: {Iface:virbr1 ExpiryTime:2025-09-07 01:52:27 +0000 UTC Type:0 Mac:52:54:00:7d:82:07 Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:pause-257218 Clientid:01:52:54:00:7d:82:07}
	I0907 00:54:06.555999  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined IP address 192.168.61.18 and MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:06.556186  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHPort
	I0907 00:54:06.556428  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHKeyPath
	I0907 00:54:06.556591  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHKeyPath
	I0907 00:54:06.556780  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHUsername
	I0907 00:54:06.556957  172686 main.go:141] libmachine: Using SSH client type: native
	I0907 00:54:06.557332  172686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.61.18 22 <nil> <nil>}
	I0907 00:54:06.557346  172686 main.go:141] libmachine: About to run SSH command:
	hostname
	I0907 00:54:06.675134  172686 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-257218
	
	I0907 00:54:06.675197  172686 main.go:141] libmachine: (pause-257218) Calling .GetMachineName
	I0907 00:54:06.675492  172686 buildroot.go:166] provisioning hostname "pause-257218"
	I0907 00:54:06.675525  172686 main.go:141] libmachine: (pause-257218) Calling .GetMachineName
	I0907 00:54:06.675788  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHHostname
	I0907 00:54:06.679300  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:06.679747  172686 main.go:141] libmachine: (pause-257218) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:82:07", ip: ""} in network mk-pause-257218: {Iface:virbr1 ExpiryTime:2025-09-07 01:52:27 +0000 UTC Type:0 Mac:52:54:00:7d:82:07 Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:pause-257218 Clientid:01:52:54:00:7d:82:07}
	I0907 00:54:06.679777  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined IP address 192.168.61.18 and MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:06.680058  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHPort
	I0907 00:54:06.680271  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHKeyPath
	I0907 00:54:06.680471  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHKeyPath
	I0907 00:54:06.680637  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHUsername
	I0907 00:54:06.680953  172686 main.go:141] libmachine: Using SSH client type: native
	I0907 00:54:06.681309  172686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.61.18 22 <nil> <nil>}
	I0907 00:54:06.681333  172686 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-257218 && echo "pause-257218" | sudo tee /etc/hostname
	I0907 00:54:06.823445  172686 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-257218
	
	I0907 00:54:06.823482  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHHostname
	I0907 00:54:06.827458  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:06.827913  172686 main.go:141] libmachine: (pause-257218) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:82:07", ip: ""} in network mk-pause-257218: {Iface:virbr1 ExpiryTime:2025-09-07 01:52:27 +0000 UTC Type:0 Mac:52:54:00:7d:82:07 Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:pause-257218 Clientid:01:52:54:00:7d:82:07}
	I0907 00:54:06.827945  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined IP address 192.168.61.18 and MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:06.828224  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHPort
	I0907 00:54:06.828495  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHKeyPath
	I0907 00:54:06.828705  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHKeyPath
	I0907 00:54:06.828922  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHUsername
	I0907 00:54:06.829160  172686 main.go:141] libmachine: Using SSH client type: native
	I0907 00:54:06.829368  172686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.61.18 22 <nil> <nil>}
	I0907 00:54:06.829387  172686 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-257218' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-257218/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-257218' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0907 00:54:06.943432  172686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0907 00:54:06.943521  172686 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21132-128697/.minikube CaCertPath:/home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21132-128697/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21132-128697/.minikube}
	I0907 00:54:06.943550  172686 buildroot.go:174] setting up certificates
	I0907 00:54:06.943562  172686 provision.go:84] configureAuth start
	I0907 00:54:06.943575  172686 main.go:141] libmachine: (pause-257218) Calling .GetMachineName
	I0907 00:54:06.943875  172686 main.go:141] libmachine: (pause-257218) Calling .GetIP
	I0907 00:54:06.947500  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:06.947988  172686 main.go:141] libmachine: (pause-257218) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:82:07", ip: ""} in network mk-pause-257218: {Iface:virbr1 ExpiryTime:2025-09-07 01:52:27 +0000 UTC Type:0 Mac:52:54:00:7d:82:07 Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:pause-257218 Clientid:01:52:54:00:7d:82:07}
	I0907 00:54:06.948020  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined IP address 192.168.61.18 and MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:06.948245  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHHostname
	I0907 00:54:06.951088  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:06.951493  172686 main.go:141] libmachine: (pause-257218) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:82:07", ip: ""} in network mk-pause-257218: {Iface:virbr1 ExpiryTime:2025-09-07 01:52:27 +0000 UTC Type:0 Mac:52:54:00:7d:82:07 Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:pause-257218 Clientid:01:52:54:00:7d:82:07}
	I0907 00:54:06.951523  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined IP address 192.168.61.18 and MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:06.951704  172686 provision.go:143] copyHostCerts
	I0907 00:54:06.951770  172686 exec_runner.go:144] found /home/jenkins/minikube-integration/21132-128697/.minikube/ca.pem, removing ...
	I0907 00:54:06.951793  172686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21132-128697/.minikube/ca.pem
	I0907 00:54:06.951867  172686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21132-128697/.minikube/ca.pem (1082 bytes)
	I0907 00:54:06.952006  172686 exec_runner.go:144] found /home/jenkins/minikube-integration/21132-128697/.minikube/cert.pem, removing ...
	I0907 00:54:06.952018  172686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21132-128697/.minikube/cert.pem
	I0907 00:54:06.952048  172686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21132-128697/.minikube/cert.pem (1123 bytes)
	I0907 00:54:06.952156  172686 exec_runner.go:144] found /home/jenkins/minikube-integration/21132-128697/.minikube/key.pem, removing ...
	I0907 00:54:06.952166  172686 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21132-128697/.minikube/key.pem
	I0907 00:54:06.952191  172686 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21132-128697/.minikube/key.pem (1679 bytes)
	I0907 00:54:06.952266  172686 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21132-128697/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca-key.pem org=jenkins.pause-257218 san=[127.0.0.1 192.168.61.18 localhost minikube pause-257218]
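The SAN list logged above (loopback, the VM IP, localhost, minikube, and the profile name) lets one server.pem satisfy TLS verification however the machine is addressed. A self-contained sketch of issuing such a certificate with Go's crypto/x509, using a throwaway in-memory CA where minikube would load ca.pem/ca-key.pem (errors elided for brevity):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA; stands in for the persisted minikube CA key pair.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert carrying the SANs from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.pause-257218"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "pause-257218"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.18")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }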
	I0907 00:54:07.110521  172686 provision.go:177] copyRemoteCerts
	I0907 00:54:07.110598  172686 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0907 00:54:07.110627  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHHostname
	I0907 00:54:07.113985  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:07.114484  172686 main.go:141] libmachine: (pause-257218) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:82:07", ip: ""} in network mk-pause-257218: {Iface:virbr1 ExpiryTime:2025-09-07 01:52:27 +0000 UTC Type:0 Mac:52:54:00:7d:82:07 Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:pause-257218 Clientid:01:52:54:00:7d:82:07}
	I0907 00:54:07.114517  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined IP address 192.168.61.18 and MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:07.114882  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHPort
	I0907 00:54:07.115160  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHKeyPath
	I0907 00:54:07.115417  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHUsername
	I0907 00:54:07.115605  172686 sshutil.go:53] new ssh client: &{IP:192.168.61.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/pause-257218/id_rsa Username:docker}
	I0907 00:54:07.215338  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0907 00:54:07.266502  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0907 00:54:07.314156  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0907 00:54:07.363648  172686 provision.go:87] duration metric: took 420.065941ms to configureAuth
	I0907 00:54:07.363691  172686 buildroot.go:189] setting minikube options for container-runtime
	I0907 00:54:07.364108  172686 config.go:182] Loaded profile config "pause-257218": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 00:54:07.364244  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHHostname
	I0907 00:54:07.367761  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:07.368283  172686 main.go:141] libmachine: (pause-257218) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:82:07", ip: ""} in network mk-pause-257218: {Iface:virbr1 ExpiryTime:2025-09-07 01:52:27 +0000 UTC Type:0 Mac:52:54:00:7d:82:07 Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:pause-257218 Clientid:01:52:54:00:7d:82:07}
	I0907 00:54:07.368317  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined IP address 192.168.61.18 and MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:07.368829  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHPort
	I0907 00:54:07.369161  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHKeyPath
	I0907 00:54:07.369373  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHKeyPath
	I0907 00:54:07.369541  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHUsername
	I0907 00:54:07.369740  172686 main.go:141] libmachine: Using SSH client type: native
	I0907 00:54:07.370118  172686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.61.18 22 <nil> <nil>}
	I0907 00:54:07.370145  172686 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0907 00:54:13.132320  172686 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0907 00:54:13.132358  172686 machine.go:96] duration metric: took 6.580436799s to provisionDockerMachine
	I0907 00:54:13.132376  172686 start.go:293] postStartSetup for "pause-257218" (driver="kvm2")
	I0907 00:54:13.132389  172686 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0907 00:54:13.132414  172686 main.go:141] libmachine: (pause-257218) Calling .DriverName
	I0907 00:54:13.132823  172686 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0907 00:54:13.132866  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHHostname
	I0907 00:54:13.136597  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:13.137154  172686 main.go:141] libmachine: (pause-257218) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:82:07", ip: ""} in network mk-pause-257218: {Iface:virbr1 ExpiryTime:2025-09-07 01:52:27 +0000 UTC Type:0 Mac:52:54:00:7d:82:07 Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:pause-257218 Clientid:01:52:54:00:7d:82:07}
	I0907 00:54:13.137188  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined IP address 192.168.61.18 and MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:13.137471  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHPort
	I0907 00:54:13.137719  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHKeyPath
	I0907 00:54:13.137966  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHUsername
	I0907 00:54:13.138171  172686 sshutil.go:53] new ssh client: &{IP:192.168.61.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/pause-257218/id_rsa Username:docker}
	I0907 00:54:13.234829  172686 ssh_runner.go:195] Run: cat /etc/os-release
	I0907 00:54:13.241960  172686 info.go:137] Remote host: Buildroot 2025.02
	I0907 00:54:13.242001  172686 filesync.go:126] Scanning /home/jenkins/minikube-integration/21132-128697/.minikube/addons for local assets ...
	I0907 00:54:13.242090  172686 filesync.go:126] Scanning /home/jenkins/minikube-integration/21132-128697/.minikube/files for local assets ...
	I0907 00:54:13.242237  172686 filesync.go:149] local asset: /home/jenkins/minikube-integration/21132-128697/.minikube/files/etc/ssl/certs/1330252.pem -> 1330252.pem in /etc/ssl/certs
	I0907 00:54:13.242401  172686 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0907 00:54:13.260278  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/files/etc/ssl/certs/1330252.pem --> /etc/ssl/certs/1330252.pem (1708 bytes)
	I0907 00:54:13.302529  172686 start.go:296] duration metric: took 170.131174ms for postStartSetup
	I0907 00:54:13.302585  172686 fix.go:56] duration metric: took 6.77911828s for fixHost
	I0907 00:54:13.302616  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHHostname
	I0907 00:54:13.306452  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:13.306856  172686 main.go:141] libmachine: (pause-257218) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:82:07", ip: ""} in network mk-pause-257218: {Iface:virbr1 ExpiryTime:2025-09-07 01:52:27 +0000 UTC Type:0 Mac:52:54:00:7d:82:07 Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:pause-257218 Clientid:01:52:54:00:7d:82:07}
	I0907 00:54:13.306889  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined IP address 192.168.61.18 and MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:13.307021  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHPort
	I0907 00:54:13.307265  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHKeyPath
	I0907 00:54:13.307524  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHKeyPath
	I0907 00:54:13.307700  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHUsername
	I0907 00:54:13.307892  172686 main.go:141] libmachine: Using SSH client type: native
	I0907 00:54:13.308195  172686 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.61.18 22 <nil> <nil>}
	I0907 00:54:13.308216  172686 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0907 00:54:13.427952  172686 main.go:141] libmachine: SSH cmd err, output: <nil>: 1757206453.423794256
	
	I0907 00:54:13.427984  172686 fix.go:216] guest clock: 1757206453.423794256
	I0907 00:54:13.427992  172686 fix.go:229] Guest: 2025-09-07 00:54:13.423794256 +0000 UTC Remote: 2025-09-07 00:54:13.302592176 +0000 UTC m=+27.271295865 (delta=121.20208ms)
	I0907 00:54:13.428014  172686 fix.go:200] guest clock delta is within tolerance: 121.20208ms
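The delta above comes from running date +%s.%N on the guest and subtracting the host clock when the result arrives; here the 121ms skew is accepted. A rough Go equivalent, where plain ssh stands in for minikube's SSH runner and the tolerance value is assumed:

    package main

    import (
        "fmt"
        "os/exec"
        "strconv"
        "strings"
        "time"
    )

    func guestClockDelta(host string) (time.Duration, error) {
        out, err := exec.Command("ssh", host, "date +%s.%N").Output()
        if err != nil {
            return 0, err
        }
        secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
        if err != nil {
            return 0, err
        }
        // float64 parsing loses sub-microsecond precision, fine for this check.
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(time.Now()), nil
    }

    func main() {
        const tolerance = 2 * time.Second // assumed bound; the run above saw ~121ms
        d, err := guestClockDelta("docker@192.168.61.18")
        if err != nil {
            fmt.Println("check failed:", err)
            return
        }
        if d < -tolerance || d > tolerance {
            fmt.Printf("guest clock delta %v exceeds tolerance\n", d)
        } else {
            fmt.Printf("guest clock delta %v within tolerance\n", d)
        }
    }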
	I0907 00:54:13.428019  172686 start.go:83] releasing machines lock for "pause-257218", held for 6.90458516s
	I0907 00:54:13.428040  172686 main.go:141] libmachine: (pause-257218) Calling .DriverName
	I0907 00:54:13.428410  172686 main.go:141] libmachine: (pause-257218) Calling .GetIP
	I0907 00:54:13.432003  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:13.432455  172686 main.go:141] libmachine: (pause-257218) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:82:07", ip: ""} in network mk-pause-257218: {Iface:virbr1 ExpiryTime:2025-09-07 01:52:27 +0000 UTC Type:0 Mac:52:54:00:7d:82:07 Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:pause-257218 Clientid:01:52:54:00:7d:82:07}
	I0907 00:54:13.432487  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined IP address 192.168.61.18 and MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:13.432679  172686 main.go:141] libmachine: (pause-257218) Calling .DriverName
	I0907 00:54:13.433484  172686 main.go:141] libmachine: (pause-257218) Calling .DriverName
	I0907 00:54:13.433709  172686 main.go:141] libmachine: (pause-257218) Calling .DriverName
	I0907 00:54:13.433836  172686 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0907 00:54:13.433898  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHHostname
	I0907 00:54:13.434231  172686 ssh_runner.go:195] Run: cat /version.json
	I0907 00:54:13.434262  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHHostname
	I0907 00:54:13.437422  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:13.437829  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:13.437865  172686 main.go:141] libmachine: (pause-257218) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:82:07", ip: ""} in network mk-pause-257218: {Iface:virbr1 ExpiryTime:2025-09-07 01:52:27 +0000 UTC Type:0 Mac:52:54:00:7d:82:07 Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:pause-257218 Clientid:01:52:54:00:7d:82:07}
	I0907 00:54:13.437879  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined IP address 192.168.61.18 and MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:13.438081  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHPort
	I0907 00:54:13.438280  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHKeyPath
	I0907 00:54:13.438384  172686 main.go:141] libmachine: (pause-257218) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:82:07", ip: ""} in network mk-pause-257218: {Iface:virbr1 ExpiryTime:2025-09-07 01:52:27 +0000 UTC Type:0 Mac:52:54:00:7d:82:07 Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:pause-257218 Clientid:01:52:54:00:7d:82:07}
	I0907 00:54:13.438419  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined IP address 192.168.61.18 and MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:13.438427  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHUsername
	I0907 00:54:13.438625  172686 sshutil.go:53] new ssh client: &{IP:192.168.61.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/pause-257218/id_rsa Username:docker}
	I0907 00:54:13.438652  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHPort
	I0907 00:54:13.438793  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHKeyPath
	I0907 00:54:13.438938  172686 main.go:141] libmachine: (pause-257218) Calling .GetSSHUsername
	I0907 00:54:13.439052  172686 sshutil.go:53] new ssh client: &{IP:192.168.61.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/pause-257218/id_rsa Username:docker}
	I0907 00:54:13.530038  172686 ssh_runner.go:195] Run: systemctl --version
	I0907 00:54:13.559323  172686 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0907 00:54:13.733286  172686 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0907 00:54:13.743495  172686 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0907 00:54:13.743590  172686 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0907 00:54:13.757311  172686 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
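The find/mv pipeline above renames any bridge or podman CNI config under /etc/cni/net.d to *.mk_disabled so they cannot conflict with the CNI minikube installs; here nothing matched. The same pass in Go (a hypothetical helper mirroring the shell, not cni.go itself):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        const dir = "/etc/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        for _, e := range entries {
            name := e.Name()
            // Skip directories and files already disabled.
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    fmt.Fprintln(os.Stderr, err)
                }
            }
        }
    }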
	I0907 00:54:13.757361  172686 start.go:495] detecting cgroup driver to use...
	I0907 00:54:13.757447  172686 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0907 00:54:13.780780  172686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0907 00:54:13.806875  172686 docker.go:218] disabling cri-docker service (if available) ...
	I0907 00:54:13.806969  172686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0907 00:54:13.831909  172686 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0907 00:54:13.852876  172686 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0907 00:54:14.071380  172686 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0907 00:54:14.313845  172686 docker.go:234] disabling docker service ...
	I0907 00:54:14.313931  172686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0907 00:54:14.362877  172686 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0907 00:54:14.384832  172686 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0907 00:54:14.618367  172686 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0907 00:54:14.837918  172686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0907 00:54:14.861931  172686 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0907 00:54:14.891597  172686 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0907 00:54:14.891672  172686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:54:14.910000  172686 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0907 00:54:14.910161  172686 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:54:14.930179  172686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:54:14.947496  172686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:54:14.964875  172686 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0907 00:54:14.983581  172686 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:54:15.002824  172686 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0907 00:54:15.020988  172686 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
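Each step above is a sed -i over /etc/crio/crio.conf.d/02-crio.conf, anchored on the key name so the replace works whether the line is commented out or set to another value. The same whole-line replace expressed in Go (a hypothetical helper, shown for the pause_image and cgroup_manager keys):

    package main

    import (
        "os"
        "regexp"
    )

    // setTOMLKey rewrites any line mentioning `key = ...` to `key = "value"`,
    // matching the sed pattern ^.*<key> = .*$ used in the log above.
    func setTOMLKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
        return os.WriteFile(path, out, 0644)
    }

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        _ = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
        _ = setTOMLKey(conf, "cgroup_manager", "cgroupfs")
    }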
	I0907 00:54:15.040737  172686 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0907 00:54:15.056249  172686 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0907 00:54:15.075587  172686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:54:15.493412  172686 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0907 00:54:21.192874  172686 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.699411667s)
	I0907 00:54:21.192913  172686 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:54:21.192967  172686 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:54:21.199990  172686 start.go:563] Will wait 60s for crictl version
	I0907 00:54:21.200089  172686 ssh_runner.go:195] Run: which crictl
	I0907 00:54:21.205519  172686 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:54:21.262617  172686 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
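"Will wait 60s for crictl version" is a readiness gate: CRI-O was just restarted, so the first crictl calls can fail until the socket is serving again. A sketch of such a polling loop (assumed shape; the real retry helper in start.go differs):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func waitForCrictl(timeout time.Duration) error {
        ctx, cancel := context.WithTimeout(context.Background(), timeout)
        defer cancel()
        for {
            out, err := exec.CommandContext(ctx, "sudo", "/usr/bin/crictl", "version").CombinedOutput()
            if err == nil {
                fmt.Print(string(out))
                return nil
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("crictl not ready after %v: %w", timeout, err)
            case <-time.After(2 * time.Second): // retry interval is assumed
            }
        }
    }

    func main() {
        if err := waitForCrictl(60 * time.Second); err != nil {
            fmt.Println(err)
        }
    }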
	I0907 00:54:21.262716  172686 ssh_runner.go:195] Run: crio --version
	I0907 00:54:21.313114  172686 ssh_runner.go:195] Run: crio --version
	I0907 00:54:21.362008  172686 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0907 00:54:21.363492  172686 main.go:141] libmachine: (pause-257218) Calling .GetIP
	I0907 00:54:21.366727  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:21.367266  172686 main.go:141] libmachine: (pause-257218) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:82:07", ip: ""} in network mk-pause-257218: {Iface:virbr1 ExpiryTime:2025-09-07 01:52:27 +0000 UTC Type:0 Mac:52:54:00:7d:82:07 Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:pause-257218 Clientid:01:52:54:00:7d:82:07}
	I0907 00:54:21.367293  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined IP address 192.168.61.18 and MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:21.367562  172686 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0907 00:54:21.374396  172686 kubeadm.go:875] updating cluster {Name:pause-257218 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-257218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.18 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0907 00:54:21.374602  172686 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0907 00:54:21.374671  172686 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:54:21.429781  172686 crio.go:514] all images are preloaded for cri-o runtime.
	I0907 00:54:21.429807  172686 crio.go:433] Images already preloaded, skipping extraction
	I0907 00:54:21.429860  172686 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:54:21.472699  172686 crio.go:514] all images are preloaded for cri-o runtime.
	I0907 00:54:21.472729  172686 cache_images.go:85] Images are preloaded, skipping loading
	I0907 00:54:21.472739  172686 kubeadm.go:926] updating node { 192.168.61.18 8443 v1.34.0 crio true true} ...
	I0907 00:54:21.472882  172686 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-257218 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:pause-257218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0907 00:54:21.472965  172686 ssh_runner.go:195] Run: crio config
	I0907 00:54:21.533820  172686 cni.go:84] Creating CNI manager for ""
	I0907 00:54:21.533853  172686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:54:21.533868  172686 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0907 00:54:21.533904  172686 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.18 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-257218 NodeName:pause-257218 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:54:21.534080  172686 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.18
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-257218"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.18"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.18"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
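The generated kubeadm config above stacks four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick way to enumerate the kinds without a YAML dependency (the on-disk path comes from the scp line a few entries below):

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        kind := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
        // Documents are separated by a bare --- line.
        for i, doc := range strings.Split(string(data), "\n---\n") {
            if m := kind.FindStringSubmatch(doc); m != nil {
                fmt.Printf("document %d: %s\n", i+1, m[1])
            }
        }
    }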
	
	I0907 00:54:21.534168  172686 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0907 00:54:21.550280  172686 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:54:21.550360  172686 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:54:21.564776  172686 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0907 00:54:21.589409  172686 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:54:21.613579  172686 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
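The three "scp memory" lines above stream in-process bytes to the guest; no source file exists on the host. An approximation with plain ssh and sudo tee (illustrative only; minikube's ssh_runner implements this internally, and the payload here is a stand-in):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // scpMemory writes an in-memory payload to a remote path by piping it
    // through ssh into sudo tee.
    func scpMemory(host, remotePath string, contents []byte) error {
        cmd := exec.Command("ssh", host, "sudo tee "+remotePath+" >/dev/null")
        cmd.Stdin = bytes.NewReader(contents)
        return cmd.Run()
    }

    func main() {
        unit := []byte("[Unit]\nWants=crio.service\n") // stand-in payload
        err := scpMemory("docker@192.168.61.18", "/lib/systemd/system/kubelet.service", unit)
        if err != nil {
            fmt.Println(err)
        }
    }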
	I0907 00:54:21.637425  172686 ssh_runner.go:195] Run: grep 192.168.61.18	control-plane.minikube.internal$ /etc/hosts
	I0907 00:54:21.642426  172686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:54:21.817118  172686 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0907 00:54:21.842175  172686 certs.go:68] Setting up /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/pause-257218 for IP: 192.168.61.18
	I0907 00:54:21.842225  172686 certs.go:194] generating shared ca certs ...
	I0907 00:54:21.842249  172686 certs.go:226] acquiring lock for ca certs: {Name:mk640ab940eb4d822d1f15a5cd2b466b6472cad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:54:21.842471  172686 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21132-128697/.minikube/ca.key
	I0907 00:54:21.842540  172686 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21132-128697/.minikube/proxy-client-ca.key
	I0907 00:54:21.842555  172686 certs.go:256] generating profile certs ...
	I0907 00:54:21.842698  172686 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/pause-257218/client.key
	I0907 00:54:21.842794  172686 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/pause-257218/apiserver.key.8978653a
	I0907 00:54:21.842864  172686 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/pause-257218/proxy-client.key
	I0907 00:54:21.843034  172686 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/133025.pem (1338 bytes)
	W0907 00:54:21.843082  172686 certs.go:480] ignoring /home/jenkins/minikube-integration/21132-128697/.minikube/certs/133025_empty.pem, impossibly tiny 0 bytes
	I0907 00:54:21.843094  172686 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:54:21.843127  172686 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:54:21.843180  172686 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:54:21.843222  172686 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/key.pem (1679 bytes)
	I0907 00:54:21.843324  172686 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/files/etc/ssl/certs/1330252.pem (1708 bytes)
	I0907 00:54:21.844305  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:54:21.884263  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:54:21.920899  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:54:21.959818  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:54:22.000573  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/pause-257218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0907 00:54:22.042214  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/pause-257218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0907 00:54:22.087526  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/pause-257218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:54:22.127082  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/pause-257218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I0907 00:54:22.167626  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/files/etc/ssl/certs/1330252.pem --> /usr/share/ca-certificates/1330252.pem (1708 bytes)
	I0907 00:54:22.219441  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:54:22.268395  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/certs/133025.pem --> /usr/share/ca-certificates/133025.pem (1338 bytes)
	I0907 00:54:22.313752  172686 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:54:22.346055  172686 ssh_runner.go:195] Run: openssl version
	I0907 00:54:22.356503  172686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1330252.pem && ln -fs /usr/share/ca-certificates/1330252.pem /etc/ssl/certs/1330252.pem"
	I0907 00:54:22.377790  172686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1330252.pem
	I0907 00:54:22.385453  172686 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:55 /usr/share/ca-certificates/1330252.pem
	I0907 00:54:22.385541  172686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1330252.pem
	I0907 00:54:22.395095  172686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1330252.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:54:22.410292  172686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:54:22.430715  172686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:54:22.437507  172686 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:47 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:54:22.437584  172686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:54:22.447530  172686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:54:22.463566  172686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/133025.pem && ln -fs /usr/share/ca-certificates/133025.pem /etc/ssl/certs/133025.pem"
	I0907 00:54:22.481035  172686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133025.pem
	I0907 00:54:22.488175  172686 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:55 /usr/share/ca-certificates/133025.pem
	I0907 00:54:22.488256  172686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133025.pem
	I0907 00:54:22.497643  172686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/133025.pem /etc/ssl/certs/51391683.0"
	I0907 00:54:22.517251  172686 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0907 00:54:22.524869  172686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:54:22.534012  172686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:54:22.543613  172686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:54:22.553590  172686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:54:22.563379  172686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:54:22.573809  172686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
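Each openssl x509 -checkend 86400 run above exits non-zero if the certificate expires within the next 24 hours, which is what would prompt regeneration on a restart. The same test in Go:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path expires inside the
    // given window, mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }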
	I0907 00:54:22.584248  172686 kubeadm.go:392] StartCluster: {Name:pause-257218 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-257218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.18 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0907 00:54:22.584421  172686 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:54:22.584498  172686 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:54:22.638417  172686 cri.go:89] found id: "0b0a42f2ca65e81c0f411c65c92440137ca07a2407ef0eb3e691b62c5d12b66f"
	I0907 00:54:22.638449  172686 cri.go:89] found id: "966fa4a120f98bd7a4a6478e2e1cec4e5d450ead219f4ff7dd12a392c8a76d90"
	I0907 00:54:22.638454  172686 cri.go:89] found id: "a747c18ebfeb177c131c243490f8f6d6402c46563aed80e9ba358c33e76813a9"
	I0907 00:54:22.638458  172686 cri.go:89] found id: "4fcd08ae154de04c096b0c062d36483fc0747c79c830b6e25fa8c194e05f527e"
	I0907 00:54:22.638463  172686 cri.go:89] found id: "fb07fafd721f657571f03b4a461ca2752eda5ec57fa006ac6fa8cc7221c98208"
	I0907 00:54:22.638468  172686 cri.go:89] found id: "4c0797ea3b965c72d4f8babfd39a3fde2c0284ff4749f7c493fb694704ba16d3"
	I0907 00:54:22.638472  172686 cri.go:89] found id: ""
	I0907 00:54:22.638530  172686 ssh_runner.go:195] Run: sudo runc list -f json
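The six IDs above come from the crictl invocation logged at 00:54:22.584498: ps -a with a pod-namespace label filter, where --quiet emits one container ID per line for the caller to split. Reproduced standalone in Go:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        // --quiet output: one container ID per line.
        for _, id := range strings.Fields(strings.TrimSpace(string(out))) {
            fmt.Println("found id:", id)
        }
    }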

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-257218 -n pause-257218
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-257218 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-257218 logs -n 25: (1.79470753s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-513546 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                   │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ ssh     │ -p cilium-513546 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                   │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ ssh     │ -p cilium-513546 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                              │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ ssh     │ -p cilium-513546 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                        │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ ssh     │ -p cilium-513546 sudo cri-dockerd --version                                                                                                                                                                                                 │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ ssh     │ -p cilium-513546 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                   │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ ssh     │ -p cilium-513546 sudo systemctl cat containerd --no-pager                                                                                                                                                                                   │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ start   │ -p cert-options-794643 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio                     │ cert-options-794643    │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │ 07 Sep 25 00:53 UTC │
	│ ssh     │ -p cilium-513546 sudo cat /etc/containerd/config.toml                                                                                                                                                                                       │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ ssh     │ -p cilium-513546 sudo containerd config dump                                                                                                                                                                                                │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ ssh     │ -p cilium-513546 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                         │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ ssh     │ -p cilium-513546 sudo systemctl cat crio --no-pager                                                                                                                                                                                         │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ ssh     │ -p cilium-513546 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                               │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ ssh     │ -p cilium-513546 sudo crio config                                                                                                                                                                                                           │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ delete  │ -p cilium-513546                                                                                                                                                                                                                            │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │ 07 Sep 25 00:52 UTC │
	│ start   │ -p old-k8s-version-477870 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-477870 │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ start   │ -p running-upgrade-239150 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                      │ running-upgrade-239150 │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │ 07 Sep 25 00:54 UTC │
	│ ssh     │ cert-options-794643 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                 │ cert-options-794643    │ jenkins │ v1.36.0 │ 07 Sep 25 00:53 UTC │ 07 Sep 25 00:53 UTC │
	│ ssh     │ -p cert-options-794643 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                               │ cert-options-794643    │ jenkins │ v1.36.0 │ 07 Sep 25 00:53 UTC │ 07 Sep 25 00:53 UTC │
	│ delete  │ -p cert-options-794643                                                                                                                                                                                                                      │ cert-options-794643    │ jenkins │ v1.36.0 │ 07 Sep 25 00:53 UTC │ 07 Sep 25 00:53 UTC │
	│ start   │ -p no-preload-752207 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-752207      │ jenkins │ v1.36.0 │ 07 Sep 25 00:53 UTC │                     │
	│ start   │ -p pause-257218 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                                              │ pause-257218           │ jenkins │ v1.36.0 │ 07 Sep 25 00:53 UTC │ 07 Sep 25 00:54 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-239150 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                 │ running-upgrade-239150 │ jenkins │ v1.36.0 │ 07 Sep 25 00:54 UTC │                     │
	│ delete  │ -p running-upgrade-239150                                                                                                                                                                                                                   │ running-upgrade-239150 │ jenkins │ v1.36.0 │ 07 Sep 25 00:54 UTC │ 07 Sep 25 00:54 UTC │
	│ start   │ -p embed-certs-631721 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-631721     │ jenkins │ v1.36.0 │ 07 Sep 25 00:54 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/07 00:54:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0907 00:54:20.136973  173060 out.go:360] Setting OutFile to fd 1 ...
	I0907 00:54:20.137308  173060 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:54:20.137320  173060 out.go:374] Setting ErrFile to fd 2...
	I0907 00:54:20.137325  173060 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:54:20.137572  173060 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-128697/.minikube/bin
	I0907 00:54:20.138250  173060 out.go:368] Setting JSON to false
	I0907 00:54:20.139226  173060 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5803,"bootTime":1757200657,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0907 00:54:20.139341  173060 start.go:140] virtualization: kvm guest
	I0907 00:54:20.141482  173060 out.go:179] * [embed-certs-631721] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0907 00:54:20.143142  173060 out.go:179]   - MINIKUBE_LOCATION=21132
	I0907 00:54:20.143169  173060 notify.go:220] Checking for updates...
	I0907 00:54:20.145717  173060 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:54:20.146978  173060 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21132-128697/kubeconfig
	I0907 00:54:20.148190  173060 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-128697/.minikube
	I0907 00:54:20.149514  173060 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0907 00:54:20.150901  173060 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 00:54:20.152580  173060 config.go:182] Loaded profile config "no-preload-752207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 00:54:20.152736  173060 config.go:182] Loaded profile config "old-k8s-version-477870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0907 00:54:20.152950  173060 config.go:182] Loaded profile config "pause-257218": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 00:54:20.153087  173060 driver.go:421] Setting default libvirt URI to qemu:///system
	I0907 00:54:20.199291  173060 out.go:179] * Using the kvm2 driver based on user configuration
	I0907 00:54:20.200650  173060 start.go:304] selected driver: kvm2
	I0907 00:54:20.200665  173060 start.go:918] validating driver "kvm2" against <nil>
	I0907 00:54:20.200691  173060 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 00:54:20.201439  173060 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:54:20.201525  173060 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21132-128697/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0907 00:54:20.218923  173060 install.go:137] /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0907 00:54:20.218993  173060 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0907 00:54:20.219362  173060 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0907 00:54:20.219409  173060 cni.go:84] Creating CNI manager for ""
	I0907 00:54:20.219472  173060 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:54:20.219483  173060 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0907 00:54:20.219558  173060 start.go:348] cluster config:
	{Name:embed-certs-631721 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-631721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0907 00:54:20.219688  173060 iso.go:125] acquiring lock: {Name:mk3bd5f7fbe7836651644a94b41f2b6111c9b69d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:54:20.221825  173060 out.go:179] * Starting "embed-certs-631721" primary control-plane node in "embed-certs-631721" cluster
	I0907 00:54:20.223219  173060 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0907 00:54:20.223276  173060 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21132-128697/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0907 00:54:20.223286  173060 cache.go:58] Caching tarball of preloaded images
	I0907 00:54:20.223382  173060 preload.go:172] Found /home/jenkins/minikube-integration/21132-128697/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0907 00:54:20.223398  173060 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0907 00:54:20.223513  173060 profile.go:143] Saving config to /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/embed-certs-631721/config.json ...
	I0907 00:54:20.223539  173060 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/embed-certs-631721/config.json: {Name:mkbf0015122395e862c1a391b190dc0b3b70920f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:54:20.223709  173060 start.go:360] acquireMachinesLock for embed-certs-631721: {Name:mk3b58ef42f26d446b63d531f457f6ac8953e3f5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 00:54:20.223758  173060 start.go:364] duration metric: took 31.787µs to acquireMachinesLock for "embed-certs-631721"
	I0907 00:54:20.223782  173060 start.go:93] Provisioning new machine with config: &{Name:embed-certs-631721 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-631721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:54:20.223854  173060 start.go:125] createHost starting for "" (driver="kvm2")
	W0907 00:54:17.318958  171646 pod_ready.go:104] pod "coredns-5dd5756b68-j7pc8" is not "Ready", error: <nil>
	W0907 00:54:19.808686  171646 pod_ready.go:104] pod "coredns-5dd5756b68-j7pc8" is not "Ready", error: <nil>
	I0907 00:54:17.223487  172493 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (3.82491204s)
	I0907 00:54:17.223528  172493 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21132-128697/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I0907 00:54:17.223577  172493 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.0
	I0907 00:54:17.223639  172493 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.0
	I0907 00:54:19.399112  172493 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.0: (2.175429854s)
	I0907 00:54:19.399152  172493 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21132-128697/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 from cache
	I0907 00:54:19.399186  172493 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.0
	I0907 00:54:19.399246  172493 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.0
	I0907 00:54:21.192874  172686 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.699411667s)
	I0907 00:54:21.192913  172686 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:54:21.192967  172686 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:54:21.199990  172686 start.go:563] Will wait 60s for crictl version
	I0907 00:54:21.200089  172686 ssh_runner.go:195] Run: which crictl
	I0907 00:54:21.205519  172686 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:54:21.262617  172686 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0907 00:54:21.262716  172686 ssh_runner.go:195] Run: crio --version
	I0907 00:54:21.313114  172686 ssh_runner.go:195] Run: crio --version
	I0907 00:54:21.362008  172686 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0907 00:54:20.225733  173060 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0907 00:54:20.225911  173060 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:54:20.225974  173060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:54:20.241826  173060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37207
	I0907 00:54:20.242319  173060 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:54:20.242890  173060 main.go:141] libmachine: Using API Version  1
	I0907 00:54:20.242918  173060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:54:20.243295  173060 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:54:20.243534  173060 main.go:141] libmachine: (embed-certs-631721) Calling .GetMachineName
	I0907 00:54:20.243714  173060 main.go:141] libmachine: (embed-certs-631721) Calling .DriverName
	I0907 00:54:20.243853  173060 start.go:159] libmachine.API.Create for "embed-certs-631721" (driver="kvm2")
	I0907 00:54:20.243882  173060 client.go:168] LocalClient.Create starting
	I0907 00:54:20.243920  173060 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca.pem
	I0907 00:54:20.243966  173060 main.go:141] libmachine: Decoding PEM data...
	I0907 00:54:20.243991  173060 main.go:141] libmachine: Parsing certificate...
	I0907 00:54:20.244069  173060 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21132-128697/.minikube/certs/cert.pem
	I0907 00:54:20.244098  173060 main.go:141] libmachine: Decoding PEM data...
	I0907 00:54:20.244117  173060 main.go:141] libmachine: Parsing certificate...
	I0907 00:54:20.244139  173060 main.go:141] libmachine: Running pre-create checks...
	I0907 00:54:20.244156  173060 main.go:141] libmachine: (embed-certs-631721) Calling .PreCreateCheck
	I0907 00:54:20.244513  173060 main.go:141] libmachine: (embed-certs-631721) Calling .GetConfigRaw
	I0907 00:54:20.244972  173060 main.go:141] libmachine: Creating machine...
	I0907 00:54:20.244992  173060 main.go:141] libmachine: (embed-certs-631721) Calling .Create
	I0907 00:54:20.245131  173060 main.go:141] libmachine: (embed-certs-631721) creating KVM machine...
	I0907 00:54:20.245152  173060 main.go:141] libmachine: (embed-certs-631721) creating network...
	I0907 00:54:20.246432  173060 main.go:141] libmachine: (embed-certs-631721) DBG | found existing default KVM network
	I0907 00:54:20.247680  173060 main.go:141] libmachine: (embed-certs-631721) DBG | I0907 00:54:20.247545  173082 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123b10}
	I0907 00:54:20.247751  173060 main.go:141] libmachine: (embed-certs-631721) DBG | created network xml: 
	I0907 00:54:20.247776  173060 main.go:141] libmachine: (embed-certs-631721) DBG | <network>
	I0907 00:54:20.247786  173060 main.go:141] libmachine: (embed-certs-631721) DBG |   <name>mk-embed-certs-631721</name>
	I0907 00:54:20.247796  173060 main.go:141] libmachine: (embed-certs-631721) DBG |   <dns enable='no'/>
	I0907 00:54:20.247804  173060 main.go:141] libmachine: (embed-certs-631721) DBG |   
	I0907 00:54:20.247818  173060 main.go:141] libmachine: (embed-certs-631721) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0907 00:54:20.247827  173060 main.go:141] libmachine: (embed-certs-631721) DBG |     <dhcp>
	I0907 00:54:20.247847  173060 main.go:141] libmachine: (embed-certs-631721) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0907 00:54:20.247868  173060 main.go:141] libmachine: (embed-certs-631721) DBG |     </dhcp>
	I0907 00:54:20.247875  173060 main.go:141] libmachine: (embed-certs-631721) DBG |   </ip>
	I0907 00:54:20.247882  173060 main.go:141] libmachine: (embed-certs-631721) DBG |   
	I0907 00:54:20.247893  173060 main.go:141] libmachine: (embed-certs-631721) DBG | </network>
	I0907 00:54:20.247903  173060 main.go:141] libmachine: (embed-certs-631721) DBG | 
	I0907 00:54:20.253959  173060 main.go:141] libmachine: (embed-certs-631721) DBG | trying to create private KVM network mk-embed-certs-631721 192.168.39.0/24...
	I0907 00:54:20.344363  173060 main.go:141] libmachine: (embed-certs-631721) DBG | private KVM network mk-embed-certs-631721 192.168.39.0/24 created
	I0907 00:54:20.344421  173060 main.go:141] libmachine: (embed-certs-631721) setting up store path in /home/jenkins/minikube-integration/21132-128697/.minikube/machines/embed-certs-631721 ...
	I0907 00:54:20.344437  173060 main.go:141] libmachine: (embed-certs-631721) building disk image from file:///home/jenkins/minikube-integration/21132-128697/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso
	I0907 00:54:20.344453  173060 main.go:141] libmachine: (embed-certs-631721) DBG | I0907 00:54:20.344329  173082 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21132-128697/.minikube
	I0907 00:54:20.344733  173060 main.go:141] libmachine: (embed-certs-631721) Downloading /home/jenkins/minikube-integration/21132-128697/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21132-128697/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso...
	I0907 00:54:20.682106  173060 main.go:141] libmachine: (embed-certs-631721) DBG | I0907 00:54:20.681922  173082 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21132-128697/.minikube/machines/embed-certs-631721/id_rsa...
	I0907 00:54:21.110014  173060 main.go:141] libmachine: (embed-certs-631721) DBG | I0907 00:54:21.109848  173082 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21132-128697/.minikube/machines/embed-certs-631721/embed-certs-631721.rawdisk...
	I0907 00:54:21.110051  173060 main.go:141] libmachine: (embed-certs-631721) DBG | Writing magic tar header
	I0907 00:54:21.110074  173060 main.go:141] libmachine: (embed-certs-631721) DBG | Writing SSH key tar header
	I0907 00:54:21.110086  173060 main.go:141] libmachine: (embed-certs-631721) DBG | I0907 00:54:21.110015  173082 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21132-128697/.minikube/machines/embed-certs-631721 ...
	I0907 00:54:21.110731  173060 main.go:141] libmachine: (embed-certs-631721) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21132-128697/.minikube/machines/embed-certs-631721
	I0907 00:54:21.110761  173060 main.go:141] libmachine: (embed-certs-631721) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21132-128697/.minikube/machines
	I0907 00:54:21.110776  173060 main.go:141] libmachine: (embed-certs-631721) setting executable bit set on /home/jenkins/minikube-integration/21132-128697/.minikube/machines/embed-certs-631721 (perms=drwx------)
	I0907 00:54:21.110786  173060 main.go:141] libmachine: (embed-certs-631721) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21132-128697/.minikube
	I0907 00:54:21.110808  173060 main.go:141] libmachine: (embed-certs-631721) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21132-128697
	I0907 00:54:21.110818  173060 main.go:141] libmachine: (embed-certs-631721) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0907 00:54:21.110827  173060 main.go:141] libmachine: (embed-certs-631721) setting executable bit set on /home/jenkins/minikube-integration/21132-128697/.minikube/machines (perms=drwxr-xr-x)
	I0907 00:54:21.110870  173060 main.go:141] libmachine: (embed-certs-631721) setting executable bit set on /home/jenkins/minikube-integration/21132-128697/.minikube (perms=drwxr-xr-x)
	I0907 00:54:21.110886  173060 main.go:141] libmachine: (embed-certs-631721) setting executable bit set on /home/jenkins/minikube-integration/21132-128697 (perms=drwxrwxr-x)
	I0907 00:54:21.110895  173060 main.go:141] libmachine: (embed-certs-631721) DBG | checking permissions on dir: /home/jenkins
	I0907 00:54:21.110905  173060 main.go:141] libmachine: (embed-certs-631721) DBG | checking permissions on dir: /home
	I0907 00:54:21.110917  173060 main.go:141] libmachine: (embed-certs-631721) DBG | skipping /home - not owner
	I0907 00:54:21.110930  173060 main.go:141] libmachine: (embed-certs-631721) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0907 00:54:21.110950  173060 main.go:141] libmachine: (embed-certs-631721) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0907 00:54:21.110961  173060 main.go:141] libmachine: (embed-certs-631721) creating domain...
	I0907 00:54:21.112499  173060 main.go:141] libmachine: (embed-certs-631721) define libvirt domain using xml: 
	I0907 00:54:21.112534  173060 main.go:141] libmachine: (embed-certs-631721) <domain type='kvm'>
	I0907 00:54:21.112550  173060 main.go:141] libmachine: (embed-certs-631721)   <name>embed-certs-631721</name>
	I0907 00:54:21.112561  173060 main.go:141] libmachine: (embed-certs-631721)   <memory unit='MiB'>3072</memory>
	I0907 00:54:21.112569  173060 main.go:141] libmachine: (embed-certs-631721)   <vcpu>2</vcpu>
	I0907 00:54:21.112576  173060 main.go:141] libmachine: (embed-certs-631721)   <features>
	I0907 00:54:21.112591  173060 main.go:141] libmachine: (embed-certs-631721)     <acpi/>
	I0907 00:54:21.112601  173060 main.go:141] libmachine: (embed-certs-631721)     <apic/>
	I0907 00:54:21.112621  173060 main.go:141] libmachine: (embed-certs-631721)     <pae/>
	I0907 00:54:21.112638  173060 main.go:141] libmachine: (embed-certs-631721)     
	I0907 00:54:21.112650  173060 main.go:141] libmachine: (embed-certs-631721)   </features>
	I0907 00:54:21.112657  173060 main.go:141] libmachine: (embed-certs-631721)   <cpu mode='host-passthrough'>
	I0907 00:54:21.112665  173060 main.go:141] libmachine: (embed-certs-631721)   
	I0907 00:54:21.112677  173060 main.go:141] libmachine: (embed-certs-631721)   </cpu>
	I0907 00:54:21.112685  173060 main.go:141] libmachine: (embed-certs-631721)   <os>
	I0907 00:54:21.112695  173060 main.go:141] libmachine: (embed-certs-631721)     <type>hvm</type>
	I0907 00:54:21.112704  173060 main.go:141] libmachine: (embed-certs-631721)     <boot dev='cdrom'/>
	I0907 00:54:21.112714  173060 main.go:141] libmachine: (embed-certs-631721)     <boot dev='hd'/>
	I0907 00:54:21.112723  173060 main.go:141] libmachine: (embed-certs-631721)     <bootmenu enable='no'/>
	I0907 00:54:21.112732  173060 main.go:141] libmachine: (embed-certs-631721)   </os>
	I0907 00:54:21.112761  173060 main.go:141] libmachine: (embed-certs-631721)   <devices>
	I0907 00:54:21.112777  173060 main.go:141] libmachine: (embed-certs-631721)     <disk type='file' device='cdrom'>
	I0907 00:54:21.112795  173060 main.go:141] libmachine: (embed-certs-631721)       <source file='/home/jenkins/minikube-integration/21132-128697/.minikube/machines/embed-certs-631721/boot2docker.iso'/>
	I0907 00:54:21.112807  173060 main.go:141] libmachine: (embed-certs-631721)       <target dev='hdc' bus='scsi'/>
	I0907 00:54:21.112816  173060 main.go:141] libmachine: (embed-certs-631721)       <readonly/>
	I0907 00:54:21.112830  173060 main.go:141] libmachine: (embed-certs-631721)     </disk>
	I0907 00:54:21.112884  173060 main.go:141] libmachine: (embed-certs-631721)     <disk type='file' device='disk'>
	I0907 00:54:21.112912  173060 main.go:141] libmachine: (embed-certs-631721)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0907 00:54:21.112973  173060 main.go:141] libmachine: (embed-certs-631721)       <source file='/home/jenkins/minikube-integration/21132-128697/.minikube/machines/embed-certs-631721/embed-certs-631721.rawdisk'/>
	I0907 00:54:21.113043  173060 main.go:141] libmachine: (embed-certs-631721)       <target dev='hda' bus='virtio'/>
	I0907 00:54:21.113059  173060 main.go:141] libmachine: (embed-certs-631721)     </disk>
	I0907 00:54:21.113066  173060 main.go:141] libmachine: (embed-certs-631721)     <interface type='network'>
	I0907 00:54:21.113076  173060 main.go:141] libmachine: (embed-certs-631721)       <source network='mk-embed-certs-631721'/>
	I0907 00:54:21.113093  173060 main.go:141] libmachine: (embed-certs-631721)       <model type='virtio'/>
	I0907 00:54:21.113103  173060 main.go:141] libmachine: (embed-certs-631721)     </interface>
	I0907 00:54:21.113115  173060 main.go:141] libmachine: (embed-certs-631721)     <interface type='network'>
	I0907 00:54:21.113127  173060 main.go:141] libmachine: (embed-certs-631721)       <source network='default'/>
	I0907 00:54:21.113135  173060 main.go:141] libmachine: (embed-certs-631721)       <model type='virtio'/>
	I0907 00:54:21.113147  173060 main.go:141] libmachine: (embed-certs-631721)     </interface>
	I0907 00:54:21.113155  173060 main.go:141] libmachine: (embed-certs-631721)     <serial type='pty'>
	I0907 00:54:21.113163  173060 main.go:141] libmachine: (embed-certs-631721)       <target port='0'/>
	I0907 00:54:21.113173  173060 main.go:141] libmachine: (embed-certs-631721)     </serial>
	I0907 00:54:21.113195  173060 main.go:141] libmachine: (embed-certs-631721)     <console type='pty'>
	I0907 00:54:21.113215  173060 main.go:141] libmachine: (embed-certs-631721)       <target type='serial' port='0'/>
	I0907 00:54:21.113260  173060 main.go:141] libmachine: (embed-certs-631721)     </console>
	I0907 00:54:21.113275  173060 main.go:141] libmachine: (embed-certs-631721)     <rng model='virtio'>
	I0907 00:54:21.113286  173060 main.go:141] libmachine: (embed-certs-631721)       <backend model='random'>/dev/random</backend>
	I0907 00:54:21.113292  173060 main.go:141] libmachine: (embed-certs-631721)     </rng>
	I0907 00:54:21.113298  173060 main.go:141] libmachine: (embed-certs-631721)     
	I0907 00:54:21.113303  173060 main.go:141] libmachine: (embed-certs-631721)     
	I0907 00:54:21.113309  173060 main.go:141] libmachine: (embed-certs-631721)   </devices>
	I0907 00:54:21.113323  173060 main.go:141] libmachine: (embed-certs-631721) </domain>
	I0907 00:54:21.113365  173060 main.go:141] libmachine: (embed-certs-631721) 
	I0907 00:54:21.118648  173060 main.go:141] libmachine: (embed-certs-631721) DBG | domain embed-certs-631721 has defined MAC address 52:54:00:2b:b9:85 in network default
	I0907 00:54:21.119472  173060 main.go:141] libmachine: (embed-certs-631721) DBG | domain embed-certs-631721 has defined MAC address 52:54:00:af:8c:d5 in network mk-embed-certs-631721
	I0907 00:54:21.119514  173060 main.go:141] libmachine: (embed-certs-631721) starting domain...
	I0907 00:54:21.119531  173060 main.go:141] libmachine: (embed-certs-631721) ensuring networks are active...
	I0907 00:54:21.120634  173060 main.go:141] libmachine: (embed-certs-631721) Ensuring network default is active
	I0907 00:54:21.121163  173060 main.go:141] libmachine: (embed-certs-631721) Ensuring network mk-embed-certs-631721 is active
	I0907 00:54:21.122323  173060 main.go:141] libmachine: (embed-certs-631721) getting domain XML...
	I0907 00:54:21.123383  173060 main.go:141] libmachine: (embed-certs-631721) creating domain...
	I0907 00:54:21.563632  173060 main.go:141] libmachine: (embed-certs-631721) waiting for IP...
	I0907 00:54:21.564616  173060 main.go:141] libmachine: (embed-certs-631721) DBG | domain embed-certs-631721 has defined MAC address 52:54:00:af:8c:d5 in network mk-embed-certs-631721
	I0907 00:54:21.565318  173060 main.go:141] libmachine: (embed-certs-631721) DBG | unable to find current IP address of domain embed-certs-631721 in network mk-embed-certs-631721
	I0907 00:54:21.565392  173060 main.go:141] libmachine: (embed-certs-631721) DBG | I0907 00:54:21.565326  173082 retry.go:31] will retry after 195.921329ms: waiting for domain to come up
	I0907 00:54:21.763207  173060 main.go:141] libmachine: (embed-certs-631721) DBG | domain embed-certs-631721 has defined MAC address 52:54:00:af:8c:d5 in network mk-embed-certs-631721
	I0907 00:54:21.763948  173060 main.go:141] libmachine: (embed-certs-631721) DBG | unable to find current IP address of domain embed-certs-631721 in network mk-embed-certs-631721
	I0907 00:54:21.764032  173060 main.go:141] libmachine: (embed-certs-631721) DBG | I0907 00:54:21.763937  173082 retry.go:31] will retry after 294.784697ms: waiting for domain to come up
	I0907 00:54:22.062143  173060 main.go:141] libmachine: (embed-certs-631721) DBG | domain embed-certs-631721 has defined MAC address 52:54:00:af:8c:d5 in network mk-embed-certs-631721
	I0907 00:54:22.062918  173060 main.go:141] libmachine: (embed-certs-631721) DBG | unable to find current IP address of domain embed-certs-631721 in network mk-embed-certs-631721
	I0907 00:54:22.062949  173060 main.go:141] libmachine: (embed-certs-631721) DBG | I0907 00:54:22.062900  173082 retry.go:31] will retry after 459.037074ms: waiting for domain to come up
	I0907 00:54:22.523188  173060 main.go:141] libmachine: (embed-certs-631721) DBG | domain embed-certs-631721 has defined MAC address 52:54:00:af:8c:d5 in network mk-embed-certs-631721
	I0907 00:54:22.523775  173060 main.go:141] libmachine: (embed-certs-631721) DBG | unable to find current IP address of domain embed-certs-631721 in network mk-embed-certs-631721
	I0907 00:54:22.523812  173060 main.go:141] libmachine: (embed-certs-631721) DBG | I0907 00:54:22.523763  173082 retry.go:31] will retry after 405.367656ms: waiting for domain to come up
	I0907 00:54:22.930563  173060 main.go:141] libmachine: (embed-certs-631721) DBG | domain embed-certs-631721 has defined MAC address 52:54:00:af:8c:d5 in network mk-embed-certs-631721
	I0907 00:54:22.931041  173060 main.go:141] libmachine: (embed-certs-631721) DBG | unable to find current IP address of domain embed-certs-631721 in network mk-embed-certs-631721
	I0907 00:54:22.931103  173060 main.go:141] libmachine: (embed-certs-631721) DBG | I0907 00:54:22.931007  173082 retry.go:31] will retry after 511.39893ms: waiting for domain to come up
	I0907 00:54:23.444072  173060 main.go:141] libmachine: (embed-certs-631721) DBG | domain embed-certs-631721 has defined MAC address 52:54:00:af:8c:d5 in network mk-embed-certs-631721
	I0907 00:54:23.444726  173060 main.go:141] libmachine: (embed-certs-631721) DBG | unable to find current IP address of domain embed-certs-631721 in network mk-embed-certs-631721
	I0907 00:54:23.444791  173060 main.go:141] libmachine: (embed-certs-631721) DBG | I0907 00:54:23.444685  173082 retry.go:31] will retry after 914.257522ms: waiting for domain to come up
	I0907 00:54:24.361048  173060 main.go:141] libmachine: (embed-certs-631721) DBG | domain embed-certs-631721 has defined MAC address 52:54:00:af:8c:d5 in network mk-embed-certs-631721
	I0907 00:54:24.361684  173060 main.go:141] libmachine: (embed-certs-631721) DBG | unable to find current IP address of domain embed-certs-631721 in network mk-embed-certs-631721
	I0907 00:54:24.361733  173060 main.go:141] libmachine: (embed-certs-631721) DBG | I0907 00:54:24.361677  173082 retry.go:31] will retry after 946.694327ms: waiting for domain to come up
	W0907 00:54:21.809161  171646 pod_ready.go:104] pod "coredns-5dd5756b68-j7pc8" is not "Ready", error: <nil>
	W0907 00:54:24.310693  171646 pod_ready.go:104] pod "coredns-5dd5756b68-j7pc8" is not "Ready", error: <nil>
	I0907 00:54:21.266456  172493 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.0: (1.867174815s)
	I0907 00:54:21.266491  172493 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21132-128697/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 from cache
	I0907 00:54:21.266521  172493 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.0
	I0907 00:54:21.266574  172493 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.0
	I0907 00:54:23.454722  172493 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.0: (2.188117854s)
	I0907 00:54:23.454761  172493 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21132-128697/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 from cache
	I0907 00:54:23.454789  172493 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I0907 00:54:23.454844  172493 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I0907 00:54:21.363492  172686 main.go:141] libmachine: (pause-257218) Calling .GetIP
	I0907 00:54:21.366727  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:21.367266  172686 main.go:141] libmachine: (pause-257218) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:82:07", ip: ""} in network mk-pause-257218: {Iface:virbr1 ExpiryTime:2025-09-07 01:52:27 +0000 UTC Type:0 Mac:52:54:00:7d:82:07 Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:pause-257218 Clientid:01:52:54:00:7d:82:07}
	I0907 00:54:21.367293  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined IP address 192.168.61.18 and MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:21.367562  172686 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0907 00:54:21.374396  172686 kubeadm.go:875] updating cluster {Name:pause-257218 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-257218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.18 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0907 00:54:21.374602  172686 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0907 00:54:21.374671  172686 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:54:21.429781  172686 crio.go:514] all images are preloaded for cri-o runtime.
	I0907 00:54:21.429807  172686 crio.go:433] Images already preloaded, skipping extraction
	I0907 00:54:21.429860  172686 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:54:21.472699  172686 crio.go:514] all images are preloaded for cri-o runtime.
	I0907 00:54:21.472729  172686 cache_images.go:85] Images are preloaded, skipping loading
	I0907 00:54:21.472739  172686 kubeadm.go:926] updating node { 192.168.61.18 8443 v1.34.0 crio true true} ...
	I0907 00:54:21.472882  172686 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-257218 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:pause-257218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0907 00:54:21.472965  172686 ssh_runner.go:195] Run: crio config
	I0907 00:54:21.533820  172686 cni.go:84] Creating CNI manager for ""
	I0907 00:54:21.533853  172686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:54:21.533868  172686 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0907 00:54:21.533904  172686 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.18 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-257218 NodeName:pause-257218 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:54:21.534080  172686 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.18
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-257218"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.18"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.18"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0907 00:54:21.534168  172686 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0907 00:54:21.550280  172686 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:54:21.550360  172686 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:54:21.564776  172686 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0907 00:54:21.589409  172686 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:54:21.613579  172686 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I0907 00:54:21.637425  172686 ssh_runner.go:195] Run: grep 192.168.61.18	control-plane.minikube.internal$ /etc/hosts
	I0907 00:54:21.642426  172686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:54:21.817118  172686 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0907 00:54:21.842175  172686 certs.go:68] Setting up /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/pause-257218 for IP: 192.168.61.18
	I0907 00:54:21.842225  172686 certs.go:194] generating shared ca certs ...
	I0907 00:54:21.842249  172686 certs.go:226] acquiring lock for ca certs: {Name:mk640ab940eb4d822d1f15a5cd2b466b6472cad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:54:21.842471  172686 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21132-128697/.minikube/ca.key
	I0907 00:54:21.842540  172686 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21132-128697/.minikube/proxy-client-ca.key
	I0907 00:54:21.842555  172686 certs.go:256] generating profile certs ...
	I0907 00:54:21.842698  172686 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/pause-257218/client.key
	I0907 00:54:21.842794  172686 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/pause-257218/apiserver.key.8978653a
	I0907 00:54:21.842864  172686 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/pause-257218/proxy-client.key
	I0907 00:54:21.843034  172686 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/133025.pem (1338 bytes)
	W0907 00:54:21.843082  172686 certs.go:480] ignoring /home/jenkins/minikube-integration/21132-128697/.minikube/certs/133025_empty.pem, impossibly tiny 0 bytes
	I0907 00:54:21.843094  172686 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:54:21.843127  172686 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:54:21.843180  172686 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:54:21.843222  172686 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/key.pem (1679 bytes)
	I0907 00:54:21.843324  172686 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/files/etc/ssl/certs/1330252.pem (1708 bytes)
	I0907 00:54:21.844305  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:54:21.884263  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:54:21.920899  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:54:21.959818  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:54:22.000573  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/pause-257218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0907 00:54:22.042214  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/pause-257218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0907 00:54:22.087526  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/pause-257218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:54:22.127082  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/pause-257218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I0907 00:54:22.167626  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/files/etc/ssl/certs/1330252.pem --> /usr/share/ca-certificates/1330252.pem (1708 bytes)
	I0907 00:54:22.219441  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:54:22.268395  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/certs/133025.pem --> /usr/share/ca-certificates/133025.pem (1338 bytes)
	I0907 00:54:22.313752  172686 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:54:22.346055  172686 ssh_runner.go:195] Run: openssl version
	I0907 00:54:22.356503  172686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1330252.pem && ln -fs /usr/share/ca-certificates/1330252.pem /etc/ssl/certs/1330252.pem"
	I0907 00:54:22.377790  172686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1330252.pem
	I0907 00:54:22.385453  172686 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:55 /usr/share/ca-certificates/1330252.pem
	I0907 00:54:22.385541  172686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1330252.pem
	I0907 00:54:22.395095  172686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1330252.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:54:22.410292  172686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:54:22.430715  172686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:54:22.437507  172686 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:47 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:54:22.437584  172686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:54:22.447530  172686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:54:22.463566  172686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/133025.pem && ln -fs /usr/share/ca-certificates/133025.pem /etc/ssl/certs/133025.pem"
	I0907 00:54:22.481035  172686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133025.pem
	I0907 00:54:22.488175  172686 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:55 /usr/share/ca-certificates/133025.pem
	I0907 00:54:22.488256  172686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133025.pem
	I0907 00:54:22.497643  172686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/133025.pem /etc/ssl/certs/51391683.0"
	I0907 00:54:22.517251  172686 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0907 00:54:22.524869  172686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:54:22.534012  172686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:54:22.543613  172686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:54:22.553590  172686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:54:22.563379  172686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:54:22.573809  172686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0907 00:54:22.584248  172686 kubeadm.go:392] StartCluster: {Name:pause-257218 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-257218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.18 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0907 00:54:22.584421  172686 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:54:22.584498  172686 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:54:22.638417  172686 cri.go:89] found id: "0b0a42f2ca65e81c0f411c65c92440137ca07a2407ef0eb3e691b62c5d12b66f"
	I0907 00:54:22.638449  172686 cri.go:89] found id: "966fa4a120f98bd7a4a6478e2e1cec4e5d450ead219f4ff7dd12a392c8a76d90"
	I0907 00:54:22.638454  172686 cri.go:89] found id: "a747c18ebfeb177c131c243490f8f6d6402c46563aed80e9ba358c33e76813a9"
	I0907 00:54:22.638458  172686 cri.go:89] found id: "4fcd08ae154de04c096b0c062d36483fc0747c79c830b6e25fa8c194e05f527e"
	I0907 00:54:22.638463  172686 cri.go:89] found id: "fb07fafd721f657571f03b4a461ca2752eda5ec57fa006ac6fa8cc7221c98208"
	I0907 00:54:22.638468  172686 cri.go:89] found id: "4c0797ea3b965c72d4f8babfd39a3fde2c0284ff4749f7c493fb694704ba16d3"
	I0907 00:54:22.638472  172686 cri.go:89] found id: ""
	I0907 00:54:22.638530  172686 ssh_runner.go:195] Run: sudo runc list -f json

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-257218 -n pause-257218
helpers_test.go:269: (dbg) Run:  kubectl --context pause-257218 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-257218 -n pause-257218
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-257218 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-257218 logs -n 25: (1.979470691s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-513546 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                   │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ ssh     │ -p cilium-513546 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                   │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ ssh     │ -p cilium-513546 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                              │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ ssh     │ -p cilium-513546 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                        │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ ssh     │ -p cilium-513546 sudo cri-dockerd --version                                                                                                                                                                                                 │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ ssh     │ -p cilium-513546 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                   │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ ssh     │ -p cilium-513546 sudo systemctl cat containerd --no-pager                                                                                                                                                                                   │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ start   │ -p cert-options-794643 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio                     │ cert-options-794643    │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │ 07 Sep 25 00:53 UTC │
	│ ssh     │ -p cilium-513546 sudo cat /etc/containerd/config.toml                                                                                                                                                                                       │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ ssh     │ -p cilium-513546 sudo containerd config dump                                                                                                                                                                                                │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ ssh     │ -p cilium-513546 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                         │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ ssh     │ -p cilium-513546 sudo systemctl cat crio --no-pager                                                                                                                                                                                         │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ ssh     │ -p cilium-513546 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                               │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ ssh     │ -p cilium-513546 sudo crio config                                                                                                                                                                                                           │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ delete  │ -p cilium-513546                                                                                                                                                                                                                            │ cilium-513546          │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │ 07 Sep 25 00:52 UTC │
	│ start   │ -p old-k8s-version-477870 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-477870 │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │                     │
	│ start   │ -p running-upgrade-239150 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                      │ running-upgrade-239150 │ jenkins │ v1.36.0 │ 07 Sep 25 00:52 UTC │ 07 Sep 25 00:54 UTC │
	│ ssh     │ cert-options-794643 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                 │ cert-options-794643    │ jenkins │ v1.36.0 │ 07 Sep 25 00:53 UTC │ 07 Sep 25 00:53 UTC │
	│ ssh     │ -p cert-options-794643 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                               │ cert-options-794643    │ jenkins │ v1.36.0 │ 07 Sep 25 00:53 UTC │ 07 Sep 25 00:53 UTC │
	│ delete  │ -p cert-options-794643                                                                                                                                                                                                                      │ cert-options-794643    │ jenkins │ v1.36.0 │ 07 Sep 25 00:53 UTC │ 07 Sep 25 00:53 UTC │
	│ start   │ -p no-preload-752207 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-752207      │ jenkins │ v1.36.0 │ 07 Sep 25 00:53 UTC │                     │
	│ start   │ -p pause-257218 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                                              │ pause-257218           │ jenkins │ v1.36.0 │ 07 Sep 25 00:53 UTC │ 07 Sep 25 00:54 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-239150 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                 │ running-upgrade-239150 │ jenkins │ v1.36.0 │ 07 Sep 25 00:54 UTC │                     │
	│ delete  │ -p running-upgrade-239150                                                                                                                                                                                                                   │ running-upgrade-239150 │ jenkins │ v1.36.0 │ 07 Sep 25 00:54 UTC │ 07 Sep 25 00:54 UTC │
	│ start   │ -p embed-certs-631721 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-631721     │ jenkins │ v1.36.0 │ 07 Sep 25 00:54 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/07 00:54:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
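	The header above documents klog's line layout. For slicing these logs mechanically, a small Go sketch that splits one line into its fields (the regexp is an illustrative reconstruction of the documented format, not code from minikube):

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches the header documented above: severity, month/day,
// time with microseconds, thread id, file:line, then the message.
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

func main() {
	m := klogLine.FindStringSubmatch(
		"I0907 00:54:20.136973  173060 out.go:360] Setting OutFile to fd 1 ...")
	if m != nil {
		fmt.Printf("severity=%s date=%s/%s time=%s tid=%s loc=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}
}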
	I0907 00:54:20.136973  173060 out.go:360] Setting OutFile to fd 1 ...
	I0907 00:54:20.137308  173060 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:54:20.137320  173060 out.go:374] Setting ErrFile to fd 2...
	I0907 00:54:20.137325  173060 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:54:20.137572  173060 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-128697/.minikube/bin
	I0907 00:54:20.138250  173060 out.go:368] Setting JSON to false
	I0907 00:54:20.139226  173060 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5803,"bootTime":1757200657,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0907 00:54:20.139341  173060 start.go:140] virtualization: kvm guest
	I0907 00:54:20.141482  173060 out.go:179] * [embed-certs-631721] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0907 00:54:20.143142  173060 out.go:179]   - MINIKUBE_LOCATION=21132
	I0907 00:54:20.143169  173060 notify.go:220] Checking for updates...
	I0907 00:54:20.145717  173060 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:54:20.146978  173060 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21132-128697/kubeconfig
	I0907 00:54:20.148190  173060 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-128697/.minikube
	I0907 00:54:20.149514  173060 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0907 00:54:20.150901  173060 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 00:54:20.152580  173060 config.go:182] Loaded profile config "no-preload-752207": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 00:54:20.152736  173060 config.go:182] Loaded profile config "old-k8s-version-477870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0907 00:54:20.152950  173060 config.go:182] Loaded profile config "pause-257218": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 00:54:20.153087  173060 driver.go:421] Setting default libvirt URI to qemu:///system
	I0907 00:54:20.199291  173060 out.go:179] * Using the kvm2 driver based on user configuration
	I0907 00:54:20.200650  173060 start.go:304] selected driver: kvm2
	I0907 00:54:20.200665  173060 start.go:918] validating driver "kvm2" against <nil>
	I0907 00:54:20.200691  173060 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 00:54:20.201439  173060 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:54:20.201525  173060 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21132-128697/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0907 00:54:20.218923  173060 install.go:137] /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0907 00:54:20.218993  173060 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0907 00:54:20.219362  173060 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0907 00:54:20.219409  173060 cni.go:84] Creating CNI manager for ""
	I0907 00:54:20.219472  173060 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:54:20.219483  173060 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0907 00:54:20.219558  173060 start.go:348] cluster config:
	{Name:embed-certs-631721 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-631721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0907 00:54:20.219688  173060 iso.go:125] acquiring lock: {Name:mk3bd5f7fbe7836651644a94b41f2b6111c9b69d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0907 00:54:20.221825  173060 out.go:179] * Starting "embed-certs-631721" primary control-plane node in "embed-certs-631721" cluster
	I0907 00:54:20.223219  173060 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0907 00:54:20.223276  173060 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21132-128697/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0907 00:54:20.223286  173060 cache.go:58] Caching tarball of preloaded images
	I0907 00:54:20.223382  173060 preload.go:172] Found /home/jenkins/minikube-integration/21132-128697/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0907 00:54:20.223398  173060 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
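	The preload check above short-circuits the tarball download when a matching archive is already cached. A sketch of that decision, with a cache layout mirroring the paths in the log (the function and home directory are illustrative):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cachedPreload reports whether a preloaded-images tarball for the given
// Kubernetes version and runtime already exists under the minikube cache.
func cachedPreload(minikubeHome, k8sVersion, runtime string) (string, bool) {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4",
		k8sVersion, runtime)
	p := filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	if _, err := os.Stat(p); err == nil {
		return p, true // found in cache, skip download
	}
	return p, false
}

func main() {
	if p, ok := cachedPreload(os.ExpandEnv("$HOME/.minikube"), "v1.34.0", "cri-o"); ok {
		fmt.Println("Found local preload:", p)
	} else {
		fmt.Println("preload missing, would download:", p)
	}
}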
	I0907 00:54:20.223513  173060 profile.go:143] Saving config to /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/embed-certs-631721/config.json ...
	I0907 00:54:20.223539  173060 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/embed-certs-631721/config.json: {Name:mkbf0015122395e862c1a391b190dc0b3b70920f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:54:20.223709  173060 start.go:360] acquireMachinesLock for embed-certs-631721: {Name:mk3b58ef42f26d446b63d531f457f6ac8953e3f5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0907 00:54:20.223758  173060 start.go:364] duration metric: took 31.787µs to acquireMachinesLock for "embed-certs-631721"
	I0907 00:54:20.223782  173060 start.go:93] Provisioning new machine with config: &{Name:embed-certs-631721 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-631721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0907 00:54:20.223854  173060 start.go:125] createHost starting for "" (driver="kvm2")
	W0907 00:54:17.318958  171646 pod_ready.go:104] pod "coredns-5dd5756b68-j7pc8" is not "Ready", error: <nil>
	W0907 00:54:19.808686  171646 pod_ready.go:104] pod "coredns-5dd5756b68-j7pc8" is not "Ready", error: <nil>
	I0907 00:54:17.223487  172493 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (3.82491204s)
	I0907 00:54:17.223528  172493 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21132-128697/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I0907 00:54:17.223577  172493 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.0
	I0907 00:54:17.223639  172493 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.0
	I0907 00:54:19.399112  172493 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.0: (2.175429854s)
	I0907 00:54:19.399152  172493 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21132-128697/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 from cache
	I0907 00:54:19.399186  172493 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.0
	I0907 00:54:19.399246  172493 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.0
	I0907 00:54:21.192874  172686 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.699411667s)
	I0907 00:54:21.192913  172686 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0907 00:54:21.192967  172686 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0907 00:54:21.199990  172686 start.go:563] Will wait 60s for crictl version
	I0907 00:54:21.200089  172686 ssh_runner.go:195] Run: which crictl
	I0907 00:54:21.205519  172686 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0907 00:54:21.262617  172686 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0907 00:54:21.262716  172686 ssh_runner.go:195] Run: crio --version
	I0907 00:54:21.313114  172686 ssh_runner.go:195] Run: crio --version
	I0907 00:54:21.362008  172686 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0907 00:54:20.225733  173060 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0907 00:54:20.225911  173060 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:54:20.225974  173060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:54:20.241826  173060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37207
	I0907 00:54:20.242319  173060 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:54:20.242890  173060 main.go:141] libmachine: Using API Version  1
	I0907 00:54:20.242918  173060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:54:20.243295  173060 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:54:20.243534  173060 main.go:141] libmachine: (embed-certs-631721) Calling .GetMachineName
	I0907 00:54:20.243714  173060 main.go:141] libmachine: (embed-certs-631721) Calling .DriverName
	I0907 00:54:20.243853  173060 start.go:159] libmachine.API.Create for "embed-certs-631721" (driver="kvm2")
	I0907 00:54:20.243882  173060 client.go:168] LocalClient.Create starting
	I0907 00:54:20.243920  173060 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca.pem
	I0907 00:54:20.243966  173060 main.go:141] libmachine: Decoding PEM data...
	I0907 00:54:20.243991  173060 main.go:141] libmachine: Parsing certificate...
	I0907 00:54:20.244069  173060 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21132-128697/.minikube/certs/cert.pem
	I0907 00:54:20.244098  173060 main.go:141] libmachine: Decoding PEM data...
	I0907 00:54:20.244117  173060 main.go:141] libmachine: Parsing certificate...
	I0907 00:54:20.244139  173060 main.go:141] libmachine: Running pre-create checks...
	I0907 00:54:20.244156  173060 main.go:141] libmachine: (embed-certs-631721) Calling .PreCreateCheck
	I0907 00:54:20.244513  173060 main.go:141] libmachine: (embed-certs-631721) Calling .GetConfigRaw
	I0907 00:54:20.244972  173060 main.go:141] libmachine: Creating machine...
	I0907 00:54:20.244992  173060 main.go:141] libmachine: (embed-certs-631721) Calling .Create
	I0907 00:54:20.245131  173060 main.go:141] libmachine: (embed-certs-631721) creating KVM machine...
	I0907 00:54:20.245152  173060 main.go:141] libmachine: (embed-certs-631721) creating network...
	I0907 00:54:20.246432  173060 main.go:141] libmachine: (embed-certs-631721) DBG | found existing default KVM network
	I0907 00:54:20.247680  173060 main.go:141] libmachine: (embed-certs-631721) DBG | I0907 00:54:20.247545  173082 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123b10}
	I0907 00:54:20.247751  173060 main.go:141] libmachine: (embed-certs-631721) DBG | created network xml: 
	I0907 00:54:20.247776  173060 main.go:141] libmachine: (embed-certs-631721) DBG | <network>
	I0907 00:54:20.247786  173060 main.go:141] libmachine: (embed-certs-631721) DBG |   <name>mk-embed-certs-631721</name>
	I0907 00:54:20.247796  173060 main.go:141] libmachine: (embed-certs-631721) DBG |   <dns enable='no'/>
	I0907 00:54:20.247804  173060 main.go:141] libmachine: (embed-certs-631721) DBG |   
	I0907 00:54:20.247818  173060 main.go:141] libmachine: (embed-certs-631721) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0907 00:54:20.247827  173060 main.go:141] libmachine: (embed-certs-631721) DBG |     <dhcp>
	I0907 00:54:20.247847  173060 main.go:141] libmachine: (embed-certs-631721) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0907 00:54:20.247868  173060 main.go:141] libmachine: (embed-certs-631721) DBG |     </dhcp>
	I0907 00:54:20.247875  173060 main.go:141] libmachine: (embed-certs-631721) DBG |   </ip>
	I0907 00:54:20.247882  173060 main.go:141] libmachine: (embed-certs-631721) DBG |   
	I0907 00:54:20.247893  173060 main.go:141] libmachine: (embed-certs-631721) DBG | </network>
	I0907 00:54:20.247903  173060 main.go:141] libmachine: (embed-certs-631721) DBG | 
	I0907 00:54:20.253959  173060 main.go:141] libmachine: (embed-certs-631721) DBG | trying to create private KVM network mk-embed-certs-631721 192.168.39.0/24...
	I0907 00:54:20.344363  173060 main.go:141] libmachine: (embed-certs-631721) DBG | private KVM network mk-embed-certs-631721 192.168.39.0/24 created
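	The network.go line above shows the driver probing for an unused private /24 before defining the libvirt network. A rough Go sketch of that idea, checking candidate subnets against addresses already bound on the host (the candidate list and step size are illustrative, not minikube's exact search order):

package main

import (
	"fmt"
	"net"
)

// freePrivateSubnet returns the first candidate 192.168.x.0/24 range that
// does not overlap any address already assigned to a host interface.
func freePrivateSubnet() (*net.IPNet, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return nil, err
	}
	for third := 39; third <= 254; third += 11 {
		_, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		taken := false
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && candidate.Contains(ipnet.IP) {
				taken = true
				break
			}
		}
		if !taken {
			return candidate, nil
		}
	}
	return nil, fmt.Errorf("no free private subnet found")
}

func main() {
	if s, err := freePrivateSubnet(); err == nil {
		fmt.Println("using free private subnet", s)
	}
}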
	I0907 00:54:20.344421  173060 main.go:141] libmachine: (embed-certs-631721) setting up store path in /home/jenkins/minikube-integration/21132-128697/.minikube/machines/embed-certs-631721 ...
	I0907 00:54:20.344437  173060 main.go:141] libmachine: (embed-certs-631721) building disk image from file:///home/jenkins/minikube-integration/21132-128697/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso
	I0907 00:54:20.344453  173060 main.go:141] libmachine: (embed-certs-631721) DBG | I0907 00:54:20.344329  173082 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21132-128697/.minikube
	I0907 00:54:20.344733  173060 main.go:141] libmachine: (embed-certs-631721) Downloading /home/jenkins/minikube-integration/21132-128697/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21132-128697/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso...
	I0907 00:54:20.682106  173060 main.go:141] libmachine: (embed-certs-631721) DBG | I0907 00:54:20.681922  173082 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21132-128697/.minikube/machines/embed-certs-631721/id_rsa...
	I0907 00:54:21.110014  173060 main.go:141] libmachine: (embed-certs-631721) DBG | I0907 00:54:21.109848  173082 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21132-128697/.minikube/machines/embed-certs-631721/embed-certs-631721.rawdisk...
	I0907 00:54:21.110051  173060 main.go:141] libmachine: (embed-certs-631721) DBG | Writing magic tar header
	I0907 00:54:21.110074  173060 main.go:141] libmachine: (embed-certs-631721) DBG | Writing SSH key tar header
	I0907 00:54:21.110086  173060 main.go:141] libmachine: (embed-certs-631721) DBG | I0907 00:54:21.110015  173082 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21132-128697/.minikube/machines/embed-certs-631721 ...
	I0907 00:54:21.110731  173060 main.go:141] libmachine: (embed-certs-631721) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21132-128697/.minikube/machines/embed-certs-631721
	I0907 00:54:21.110761  173060 main.go:141] libmachine: (embed-certs-631721) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21132-128697/.minikube/machines
	I0907 00:54:21.110776  173060 main.go:141] libmachine: (embed-certs-631721) setting executable bit set on /home/jenkins/minikube-integration/21132-128697/.minikube/machines/embed-certs-631721 (perms=drwx------)
	I0907 00:54:21.110786  173060 main.go:141] libmachine: (embed-certs-631721) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21132-128697/.minikube
	I0907 00:54:21.110808  173060 main.go:141] libmachine: (embed-certs-631721) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21132-128697
	I0907 00:54:21.110818  173060 main.go:141] libmachine: (embed-certs-631721) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0907 00:54:21.110827  173060 main.go:141] libmachine: (embed-certs-631721) setting executable bit set on /home/jenkins/minikube-integration/21132-128697/.minikube/machines (perms=drwxr-xr-x)
	I0907 00:54:21.110870  173060 main.go:141] libmachine: (embed-certs-631721) setting executable bit set on /home/jenkins/minikube-integration/21132-128697/.minikube (perms=drwxr-xr-x)
	I0907 00:54:21.110886  173060 main.go:141] libmachine: (embed-certs-631721) setting executable bit set on /home/jenkins/minikube-integration/21132-128697 (perms=drwxrwxr-x)
	I0907 00:54:21.110895  173060 main.go:141] libmachine: (embed-certs-631721) DBG | checking permissions on dir: /home/jenkins
	I0907 00:54:21.110905  173060 main.go:141] libmachine: (embed-certs-631721) DBG | checking permissions on dir: /home
	I0907 00:54:21.110917  173060 main.go:141] libmachine: (embed-certs-631721) DBG | skipping /home - not owner
	I0907 00:54:21.110930  173060 main.go:141] libmachine: (embed-certs-631721) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0907 00:54:21.110950  173060 main.go:141] libmachine: (embed-certs-631721) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0907 00:54:21.110961  173060 main.go:141] libmachine: (embed-certs-631721) creating domain...
	I0907 00:54:21.112499  173060 main.go:141] libmachine: (embed-certs-631721) define libvirt domain using xml: 
	I0907 00:54:21.112534  173060 main.go:141] libmachine: (embed-certs-631721) <domain type='kvm'>
	I0907 00:54:21.112550  173060 main.go:141] libmachine: (embed-certs-631721)   <name>embed-certs-631721</name>
	I0907 00:54:21.112561  173060 main.go:141] libmachine: (embed-certs-631721)   <memory unit='MiB'>3072</memory>
	I0907 00:54:21.112569  173060 main.go:141] libmachine: (embed-certs-631721)   <vcpu>2</vcpu>
	I0907 00:54:21.112576  173060 main.go:141] libmachine: (embed-certs-631721)   <features>
	I0907 00:54:21.112591  173060 main.go:141] libmachine: (embed-certs-631721)     <acpi/>
	I0907 00:54:21.112601  173060 main.go:141] libmachine: (embed-certs-631721)     <apic/>
	I0907 00:54:21.112621  173060 main.go:141] libmachine: (embed-certs-631721)     <pae/>
	I0907 00:54:21.112638  173060 main.go:141] libmachine: (embed-certs-631721)     
	I0907 00:54:21.112650  173060 main.go:141] libmachine: (embed-certs-631721)   </features>
	I0907 00:54:21.112657  173060 main.go:141] libmachine: (embed-certs-631721)   <cpu mode='host-passthrough'>
	I0907 00:54:21.112665  173060 main.go:141] libmachine: (embed-certs-631721)   
	I0907 00:54:21.112677  173060 main.go:141] libmachine: (embed-certs-631721)   </cpu>
	I0907 00:54:21.112685  173060 main.go:141] libmachine: (embed-certs-631721)   <os>
	I0907 00:54:21.112695  173060 main.go:141] libmachine: (embed-certs-631721)     <type>hvm</type>
	I0907 00:54:21.112704  173060 main.go:141] libmachine: (embed-certs-631721)     <boot dev='cdrom'/>
	I0907 00:54:21.112714  173060 main.go:141] libmachine: (embed-certs-631721)     <boot dev='hd'/>
	I0907 00:54:21.112723  173060 main.go:141] libmachine: (embed-certs-631721)     <bootmenu enable='no'/>
	I0907 00:54:21.112732  173060 main.go:141] libmachine: (embed-certs-631721)   </os>
	I0907 00:54:21.112761  173060 main.go:141] libmachine: (embed-certs-631721)   <devices>
	I0907 00:54:21.112777  173060 main.go:141] libmachine: (embed-certs-631721)     <disk type='file' device='cdrom'>
	I0907 00:54:21.112795  173060 main.go:141] libmachine: (embed-certs-631721)       <source file='/home/jenkins/minikube-integration/21132-128697/.minikube/machines/embed-certs-631721/boot2docker.iso'/>
	I0907 00:54:21.112807  173060 main.go:141] libmachine: (embed-certs-631721)       <target dev='hdc' bus='scsi'/>
	I0907 00:54:21.112816  173060 main.go:141] libmachine: (embed-certs-631721)       <readonly/>
	I0907 00:54:21.112830  173060 main.go:141] libmachine: (embed-certs-631721)     </disk>
	I0907 00:54:21.112884  173060 main.go:141] libmachine: (embed-certs-631721)     <disk type='file' device='disk'>
	I0907 00:54:21.112912  173060 main.go:141] libmachine: (embed-certs-631721)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0907 00:54:21.112973  173060 main.go:141] libmachine: (embed-certs-631721)       <source file='/home/jenkins/minikube-integration/21132-128697/.minikube/machines/embed-certs-631721/embed-certs-631721.rawdisk'/>
	I0907 00:54:21.113043  173060 main.go:141] libmachine: (embed-certs-631721)       <target dev='hda' bus='virtio'/>
	I0907 00:54:21.113059  173060 main.go:141] libmachine: (embed-certs-631721)     </disk>
	I0907 00:54:21.113066  173060 main.go:141] libmachine: (embed-certs-631721)     <interface type='network'>
	I0907 00:54:21.113076  173060 main.go:141] libmachine: (embed-certs-631721)       <source network='mk-embed-certs-631721'/>
	I0907 00:54:21.113093  173060 main.go:141] libmachine: (embed-certs-631721)       <model type='virtio'/>
	I0907 00:54:21.113103  173060 main.go:141] libmachine: (embed-certs-631721)     </interface>
	I0907 00:54:21.113115  173060 main.go:141] libmachine: (embed-certs-631721)     <interface type='network'>
	I0907 00:54:21.113127  173060 main.go:141] libmachine: (embed-certs-631721)       <source network='default'/>
	I0907 00:54:21.113135  173060 main.go:141] libmachine: (embed-certs-631721)       <model type='virtio'/>
	I0907 00:54:21.113147  173060 main.go:141] libmachine: (embed-certs-631721)     </interface>
	I0907 00:54:21.113155  173060 main.go:141] libmachine: (embed-certs-631721)     <serial type='pty'>
	I0907 00:54:21.113163  173060 main.go:141] libmachine: (embed-certs-631721)       <target port='0'/>
	I0907 00:54:21.113173  173060 main.go:141] libmachine: (embed-certs-631721)     </serial>
	I0907 00:54:21.113195  173060 main.go:141] libmachine: (embed-certs-631721)     <console type='pty'>
	I0907 00:54:21.113215  173060 main.go:141] libmachine: (embed-certs-631721)       <target type='serial' port='0'/>
	I0907 00:54:21.113260  173060 main.go:141] libmachine: (embed-certs-631721)     </console>
	I0907 00:54:21.113275  173060 main.go:141] libmachine: (embed-certs-631721)     <rng model='virtio'>
	I0907 00:54:21.113286  173060 main.go:141] libmachine: (embed-certs-631721)       <backend model='random'>/dev/random</backend>
	I0907 00:54:21.113292  173060 main.go:141] libmachine: (embed-certs-631721)     </rng>
	I0907 00:54:21.113298  173060 main.go:141] libmachine: (embed-certs-631721)     
	I0907 00:54:21.113303  173060 main.go:141] libmachine: (embed-certs-631721)     
	I0907 00:54:21.113309  173060 main.go:141] libmachine: (embed-certs-631721)   </devices>
	I0907 00:54:21.113323  173060 main.go:141] libmachine: (embed-certs-631721) </domain>
	I0907 00:54:21.113365  173060 main.go:141] libmachine: (embed-certs-631721) 
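	The domain XML printed above is generated per profile. A stripped-down sketch of rendering such a definition with text/template, keeping only the fields that vary in the log (field set and values are illustrative, not minikube's full template):

package main

import (
	"os"
	"text/template"
)

// A pared-down version of the domain definition printed above; only the
// fields that vary per profile are parameterized.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	t.Execute(os.Stdout, struct {
		Name, DiskPath, Network string
		MemoryMB, CPUs          int
	}{"embed-certs-631721", "/tmp/embed-certs-631721.rawdisk",
		"mk-embed-certs-631721", 3072, 2})
}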
	I0907 00:54:21.118648  173060 main.go:141] libmachine: (embed-certs-631721) DBG | domain embed-certs-631721 has defined MAC address 52:54:00:2b:b9:85 in network default
	I0907 00:54:21.119472  173060 main.go:141] libmachine: (embed-certs-631721) DBG | domain embed-certs-631721 has defined MAC address 52:54:00:af:8c:d5 in network mk-embed-certs-631721
	I0907 00:54:21.119514  173060 main.go:141] libmachine: (embed-certs-631721) starting domain...
	I0907 00:54:21.119531  173060 main.go:141] libmachine: (embed-certs-631721) ensuring networks are active...
	I0907 00:54:21.120634  173060 main.go:141] libmachine: (embed-certs-631721) Ensuring network default is active
	I0907 00:54:21.121163  173060 main.go:141] libmachine: (embed-certs-631721) Ensuring network mk-embed-certs-631721 is active
	I0907 00:54:21.122323  173060 main.go:141] libmachine: (embed-certs-631721) getting domain XML...
	I0907 00:54:21.123383  173060 main.go:141] libmachine: (embed-certs-631721) creating domain...
	I0907 00:54:21.563632  173060 main.go:141] libmachine: (embed-certs-631721) waiting for IP...
	I0907 00:54:21.564616  173060 main.go:141] libmachine: (embed-certs-631721) DBG | domain embed-certs-631721 has defined MAC address 52:54:00:af:8c:d5 in network mk-embed-certs-631721
	I0907 00:54:21.565318  173060 main.go:141] libmachine: (embed-certs-631721) DBG | unable to find current IP address of domain embed-certs-631721 in network mk-embed-certs-631721
	I0907 00:54:21.565392  173060 main.go:141] libmachine: (embed-certs-631721) DBG | I0907 00:54:21.565326  173082 retry.go:31] will retry after 195.921329ms: waiting for domain to come up
	I0907 00:54:21.763207  173060 main.go:141] libmachine: (embed-certs-631721) DBG | domain embed-certs-631721 has defined MAC address 52:54:00:af:8c:d5 in network mk-embed-certs-631721
	I0907 00:54:21.763948  173060 main.go:141] libmachine: (embed-certs-631721) DBG | unable to find current IP address of domain embed-certs-631721 in network mk-embed-certs-631721
	I0907 00:54:21.764032  173060 main.go:141] libmachine: (embed-certs-631721) DBG | I0907 00:54:21.763937  173082 retry.go:31] will retry after 294.784697ms: waiting for domain to come up
	I0907 00:54:22.062143  173060 main.go:141] libmachine: (embed-certs-631721) DBG | domain embed-certs-631721 has defined MAC address 52:54:00:af:8c:d5 in network mk-embed-certs-631721
	I0907 00:54:22.062918  173060 main.go:141] libmachine: (embed-certs-631721) DBG | unable to find current IP address of domain embed-certs-631721 in network mk-embed-certs-631721
	I0907 00:54:22.062949  173060 main.go:141] libmachine: (embed-certs-631721) DBG | I0907 00:54:22.062900  173082 retry.go:31] will retry after 459.037074ms: waiting for domain to come up
	I0907 00:54:22.523188  173060 main.go:141] libmachine: (embed-certs-631721) DBG | domain embed-certs-631721 has defined MAC address 52:54:00:af:8c:d5 in network mk-embed-certs-631721
	I0907 00:54:22.523775  173060 main.go:141] libmachine: (embed-certs-631721) DBG | unable to find current IP address of domain embed-certs-631721 in network mk-embed-certs-631721
	I0907 00:54:22.523812  173060 main.go:141] libmachine: (embed-certs-631721) DBG | I0907 00:54:22.523763  173082 retry.go:31] will retry after 405.367656ms: waiting for domain to come up
	I0907 00:54:22.930563  173060 main.go:141] libmachine: (embed-certs-631721) DBG | domain embed-certs-631721 has defined MAC address 52:54:00:af:8c:d5 in network mk-embed-certs-631721
	I0907 00:54:22.931041  173060 main.go:141] libmachine: (embed-certs-631721) DBG | unable to find current IP address of domain embed-certs-631721 in network mk-embed-certs-631721
	I0907 00:54:22.931103  173060 main.go:141] libmachine: (embed-certs-631721) DBG | I0907 00:54:22.931007  173082 retry.go:31] will retry after 511.39893ms: waiting for domain to come up
	I0907 00:54:23.444072  173060 main.go:141] libmachine: (embed-certs-631721) DBG | domain embed-certs-631721 has defined MAC address 52:54:00:af:8c:d5 in network mk-embed-certs-631721
	I0907 00:54:23.444726  173060 main.go:141] libmachine: (embed-certs-631721) DBG | unable to find current IP address of domain embed-certs-631721 in network mk-embed-certs-631721
	I0907 00:54:23.444791  173060 main.go:141] libmachine: (embed-certs-631721) DBG | I0907 00:54:23.444685  173082 retry.go:31] will retry after 914.257522ms: waiting for domain to come up
	I0907 00:54:24.361048  173060 main.go:141] libmachine: (embed-certs-631721) DBG | domain embed-certs-631721 has defined MAC address 52:54:00:af:8c:d5 in network mk-embed-certs-631721
	I0907 00:54:24.361684  173060 main.go:141] libmachine: (embed-certs-631721) DBG | unable to find current IP address of domain embed-certs-631721 in network mk-embed-certs-631721
	I0907 00:54:24.361733  173060 main.go:141] libmachine: (embed-certs-631721) DBG | I0907 00:54:24.361677  173082 retry.go:31] will retry after 946.694327ms: waiting for domain to come up
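	The retry.go lines above poll for the domain's IP with a growing, jittered delay until DHCP hands out a lease. A self-contained Go sketch of that pattern (the lookup function is a stand-in for the real libvirt lease query):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns a non-empty IP, sleeping a
// randomized, growing interval between attempts, as in the log above.
func waitForIP(lookup func() string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip := lookup(); ip != "" {
			return ip, nil
		}
		d := backoff/2 + time.Duration(rand.Int63n(int64(backoff))) // jitter
		fmt.Printf("will retry after %v: waiting for domain to come up\n", d)
		time.Sleep(d)
		if backoff < 2*time.Second {
			backoff *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for IP")
}

func main() {
	n := 0
	ip, err := waitForIP(func() string {
		if n++; n > 3 {
			return "192.168.39.10" // simulated lease after a few polls
		}
		return ""
	}, time.Minute)
	fmt.Println(ip, err)
}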
	W0907 00:54:21.809161  171646 pod_ready.go:104] pod "coredns-5dd5756b68-j7pc8" is not "Ready", error: <nil>
	W0907 00:54:24.310693  171646 pod_ready.go:104] pod "coredns-5dd5756b68-j7pc8" is not "Ready", error: <nil>
	I0907 00:54:21.266456  172493 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.0: (1.867174815s)
	I0907 00:54:21.266491  172493 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21132-128697/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 from cache
	I0907 00:54:21.266521  172493 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.0
	I0907 00:54:21.266574  172493 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.0
	I0907 00:54:23.454722  172493 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.0: (2.188117854s)
	I0907 00:54:23.454761  172493 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21132-128697/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 from cache
	I0907 00:54:23.454789  172493 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I0907 00:54:23.454844  172493 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
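	The cache_images.go sequence above loads each cached image tarball into the node with "sudo podman load -i" and reports per-image timings. A compact Go sketch of the same loop (paths are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"time"
)

// loadImages loads each cached image tarball into container storage one
// at a time and reports the elapsed time per image, as in the log above.
func loadImages(tarballs []string) error {
	for _, tb := range tarballs {
		start := time.Now()
		fmt.Println("Loading image:", tb)
		if out, err := exec.Command("sudo", "podman", "load", "-i", tb).CombinedOutput(); err != nil {
			return fmt.Errorf("podman load %s: %v: %s", tb, err, out)
		}
		fmt.Printf("Completed: sudo podman load -i %s: (%s)\n", tb, time.Since(start))
	}
	return nil
}

func main() {
	dir := "/var/lib/minikube/images"
	err := loadImages([]string{
		filepath.Join(dir, "kube-apiserver_v1.34.0"),
		filepath.Join(dir, "etcd_3.6.4-0"),
	})
	if err != nil {
		fmt.Println("error:", err)
	}
}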
	I0907 00:54:21.363492  172686 main.go:141] libmachine: (pause-257218) Calling .GetIP
	I0907 00:54:21.366727  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:21.367266  172686 main.go:141] libmachine: (pause-257218) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:82:07", ip: ""} in network mk-pause-257218: {Iface:virbr1 ExpiryTime:2025-09-07 01:52:27 +0000 UTC Type:0 Mac:52:54:00:7d:82:07 Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:pause-257218 Clientid:01:52:54:00:7d:82:07}
	I0907 00:54:21.367293  172686 main.go:141] libmachine: (pause-257218) DBG | domain pause-257218 has defined IP address 192.168.61.18 and MAC address 52:54:00:7d:82:07 in network mk-pause-257218
	I0907 00:54:21.367562  172686 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0907 00:54:21.374396  172686 kubeadm.go:875] updating cluster {Name:pause-257218 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-257218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.18 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0907 00:54:21.374602  172686 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0907 00:54:21.374671  172686 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:54:21.429781  172686 crio.go:514] all images are preloaded for cri-o runtime.
	I0907 00:54:21.429807  172686 crio.go:433] Images already preloaded, skipping extraction
	I0907 00:54:21.429860  172686 ssh_runner.go:195] Run: sudo crictl images --output json
	I0907 00:54:21.472699  172686 crio.go:514] all images are preloaded for cri-o runtime.
	I0907 00:54:21.472729  172686 cache_images.go:85] Images are preloaded, skipping loading
	I0907 00:54:21.472739  172686 kubeadm.go:926] updating node { 192.168.61.18 8443 v1.34.0 crio true true} ...
	I0907 00:54:21.472882  172686 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-257218 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:pause-257218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0907 00:54:21.472965  172686 ssh_runner.go:195] Run: crio config
	I0907 00:54:21.533820  172686 cni.go:84] Creating CNI manager for ""
	I0907 00:54:21.533853  172686 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0907 00:54:21.533868  172686 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0907 00:54:21.533904  172686 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.18 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-257218 NodeName:pause-257218 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0907 00:54:21.534080  172686 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.18
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-257218"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.18"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.18"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
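	The kubelet unit drop-in and kubeadm config above are rendered from templates and copied to the node over SSH. A minimal sketch of generating the kubelet drop-in seen earlier in this log with text/template (the flag set and values are taken from the log; the template itself is an illustrative reconstruction):

package main

import (
	"os"
	"text/template"
)

// The systemd drop-in written to
// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, reduced to the
// flags visible in the log above.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.34.0",
		"NodeName":          "pause-257218",
		"NodeIP":            "192.168.61.18",
	})
}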
	I0907 00:54:21.534168  172686 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0907 00:54:21.550280  172686 binaries.go:44] Found k8s binaries, skipping transfer
	I0907 00:54:21.550360  172686 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0907 00:54:21.564776  172686 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0907 00:54:21.589409  172686 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0907 00:54:21.613579  172686 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I0907 00:54:21.637425  172686 ssh_runner.go:195] Run: grep 192.168.61.18	control-plane.minikube.internal$ /etc/hosts
	I0907 00:54:21.642426  172686 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0907 00:54:21.817118  172686 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0907 00:54:21.842175  172686 certs.go:68] Setting up /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/pause-257218 for IP: 192.168.61.18
	I0907 00:54:21.842225  172686 certs.go:194] generating shared ca certs ...
	I0907 00:54:21.842249  172686 certs.go:226] acquiring lock for ca certs: {Name:mk640ab940eb4d822d1f15a5cd2b466b6472cad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0907 00:54:21.842471  172686 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21132-128697/.minikube/ca.key
	I0907 00:54:21.842540  172686 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21132-128697/.minikube/proxy-client-ca.key
	I0907 00:54:21.842555  172686 certs.go:256] generating profile certs ...
	I0907 00:54:21.842698  172686 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/pause-257218/client.key
	I0907 00:54:21.842794  172686 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/pause-257218/apiserver.key.8978653a
	I0907 00:54:21.842864  172686 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/pause-257218/proxy-client.key
	I0907 00:54:21.843034  172686 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/133025.pem (1338 bytes)
	W0907 00:54:21.843082  172686 certs.go:480] ignoring /home/jenkins/minikube-integration/21132-128697/.minikube/certs/133025_empty.pem, impossibly tiny 0 bytes
	I0907 00:54:21.843094  172686 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca-key.pem (1675 bytes)
	I0907 00:54:21.843127  172686 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/ca.pem (1082 bytes)
	I0907 00:54:21.843180  172686 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/cert.pem (1123 bytes)
	I0907 00:54:21.843222  172686 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/certs/key.pem (1679 bytes)
	I0907 00:54:21.843324  172686 certs.go:484] found cert: /home/jenkins/minikube-integration/21132-128697/.minikube/files/etc/ssl/certs/1330252.pem (1708 bytes)
	I0907 00:54:21.844305  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0907 00:54:21.884263  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0907 00:54:21.920899  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0907 00:54:21.959818  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0907 00:54:22.000573  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/pause-257218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0907 00:54:22.042214  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/pause-257218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0907 00:54:22.087526  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/pause-257218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0907 00:54:22.127082  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/pause-257218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I0907 00:54:22.167626  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/files/etc/ssl/certs/1330252.pem --> /usr/share/ca-certificates/1330252.pem (1708 bytes)
	I0907 00:54:22.219441  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0907 00:54:22.268395  172686 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21132-128697/.minikube/certs/133025.pem --> /usr/share/ca-certificates/133025.pem (1338 bytes)
	I0907 00:54:22.313752  172686 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0907 00:54:22.346055  172686 ssh_runner.go:195] Run: openssl version
	I0907 00:54:22.356503  172686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1330252.pem && ln -fs /usr/share/ca-certificates/1330252.pem /etc/ssl/certs/1330252.pem"
	I0907 00:54:22.377790  172686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1330252.pem
	I0907 00:54:22.385453  172686 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:55 /usr/share/ca-certificates/1330252.pem
	I0907 00:54:22.385541  172686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1330252.pem
	I0907 00:54:22.395095  172686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1330252.pem /etc/ssl/certs/3ec20f2e.0"
	I0907 00:54:22.410292  172686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0907 00:54:22.430715  172686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:54:22.437507  172686 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:47 /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:54:22.437584  172686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0907 00:54:22.447530  172686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0907 00:54:22.463566  172686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/133025.pem && ln -fs /usr/share/ca-certificates/133025.pem /etc/ssl/certs/133025.pem"
	I0907 00:54:22.481035  172686 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/133025.pem
	I0907 00:54:22.488175  172686 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:55 /usr/share/ca-certificates/133025.pem
	I0907 00:54:22.488256  172686 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/133025.pem
	I0907 00:54:22.497643  172686 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/133025.pem /etc/ssl/certs/51391683.0"
	I0907 00:54:22.517251  172686 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0907 00:54:22.524869  172686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0907 00:54:22.534012  172686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0907 00:54:22.543613  172686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0907 00:54:22.553590  172686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0907 00:54:22.563379  172686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0907 00:54:22.573809  172686 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0907 00:54:22.584248  172686 kubeadm.go:392] StartCluster: {Name:pause-257218 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-257218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.18 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0907 00:54:22.584421  172686 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0907 00:54:22.584498  172686 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0907 00:54:22.638417  172686 cri.go:89] found id: "0b0a42f2ca65e81c0f411c65c92440137ca07a2407ef0eb3e691b62c5d12b66f"
	I0907 00:54:22.638449  172686 cri.go:89] found id: "966fa4a120f98bd7a4a6478e2e1cec4e5d450ead219f4ff7dd12a392c8a76d90"
	I0907 00:54:22.638454  172686 cri.go:89] found id: "a747c18ebfeb177c131c243490f8f6d6402c46563aed80e9ba358c33e76813a9"
	I0907 00:54:22.638458  172686 cri.go:89] found id: "4fcd08ae154de04c096b0c062d36483fc0747c79c830b6e25fa8c194e05f527e"
	I0907 00:54:22.638463  172686 cri.go:89] found id: "fb07fafd721f657571f03b4a461ca2752eda5ec57fa006ac6fa8cc7221c98208"
	I0907 00:54:22.638468  172686 cri.go:89] found id: "4c0797ea3b965c72d4f8babfd39a3fde2c0284ff4749f7c493fb694704ba16d3"
	I0907 00:54:22.638472  172686 cri.go:89] found id: ""
	I0907 00:54:22.638530  172686 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-257218 -n pause-257218
helpers_test.go:269: (dbg) Run:  kubectl --context pause-257218 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (66.12s)
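For reference: the "openssl x509 -noout -in <cert> -checkend 86400" probes in the log above ask whether each control-plane certificate is still valid 24 hours from now. A minimal Go sketch of the same check follows; the cert path is copied from this run's log, and minikube itself shells out to openssl rather than using this code.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at certPath will have
// expired window from now, i.e. the same question -checkend asks.
func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Expiring if NotAfter falls before now+window.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", expiring)
}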

                                                
                                    

Test pass (281/324)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 10.34
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.28
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.0/json-events 4.25
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.06
18 TestDownloadOnly/v1.34.0/DeleteAll 0.15
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 1.77
22 TestOffline 136.76
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.19
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
27 TestAddons/Setup 160.06
31 TestAddons/serial/GCPAuth/Namespaces 0.17
32 TestAddons/serial/GCPAuth/FakeCredentials 10.59
35 TestAddons/parallel/Registry 18.72
36 TestAddons/parallel/RegistryCreds 0.97
38 TestAddons/parallel/InspektorGadget 6.34
39 TestAddons/parallel/MetricsServer 6.82
41 TestAddons/parallel/CSI 47.7
42 TestAddons/parallel/Headlamp 20.58
43 TestAddons/parallel/CloudSpanner 7.12
44 TestAddons/parallel/LocalPath 53.32
45 TestAddons/parallel/NvidiaDevicePlugin 6.8
46 TestAddons/parallel/Yakd 12.14
48 TestAddons/StoppedEnableDisable 91.2
49 TestCertOptions 66
50 TestCertExpiration 333.81
52 TestForceSystemdFlag 99.8
53 TestForceSystemdEnv 47.59
55 TestKVMDriverInstallOrUpdate 1.4
59 TestErrorSpam/setup 44.89
60 TestErrorSpam/start 0.37
61 TestErrorSpam/status 0.84
62 TestErrorSpam/pause 1.82
63 TestErrorSpam/unpause 2.17
64 TestErrorSpam/stop 4.85
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 89.5
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 57.99
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.36
76 TestFunctional/serial/CacheCmd/cache/add_local 1.14
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.79
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
84 TestFunctional/serial/ExtraConfig 36.23
85 TestFunctional/serial/ComponentHealth 0.08
86 TestFunctional/serial/LogsCmd 1.58
87 TestFunctional/serial/LogsFileCmd 1.59
88 TestFunctional/serial/InvalidService 4.36
90 TestFunctional/parallel/ConfigCmd 0.42
91 TestFunctional/parallel/DashboardCmd 9.37
92 TestFunctional/parallel/DryRun 0.31
93 TestFunctional/parallel/InternationalLanguage 0.15
94 TestFunctional/parallel/StatusCmd 0.96
98 TestFunctional/parallel/ServiceCmdConnect 8.63
99 TestFunctional/parallel/AddonsCmd 0.14
100 TestFunctional/parallel/PersistentVolumeClaim 38.53
102 TestFunctional/parallel/SSHCmd 0.57
103 TestFunctional/parallel/CpCmd 1.62
104 TestFunctional/parallel/MySQL 21.78
105 TestFunctional/parallel/FileSync 0.29
106 TestFunctional/parallel/CertSync 1.62
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
114 TestFunctional/parallel/License 0.39
115 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
118 TestFunctional/parallel/Version/short 0.06
119 TestFunctional/parallel/Version/components 0.77
120 TestFunctional/parallel/ImageCommands/ImageListShort 1.33
121 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
122 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
123 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
124 TestFunctional/parallel/ImageCommands/ImageBuild 3.76
125 TestFunctional/parallel/ImageCommands/Setup 0.43
126 TestFunctional/parallel/ServiceCmd/DeployApp 19.25
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.15
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 7.78
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.16
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.89
140 TestFunctional/parallel/ImageCommands/ImageRemove 0.88
141 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 4.54
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.6
143 TestFunctional/parallel/ServiceCmd/List 0.5
144 TestFunctional/parallel/ServiceCmd/JSONOutput 0.54
145 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
146 TestFunctional/parallel/ServiceCmd/Format 0.34
147 TestFunctional/parallel/ServiceCmd/URL 0.35
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
149 TestFunctional/parallel/MountCmd/any-port 10.73
150 TestFunctional/parallel/ProfileCmd/profile_list 0.37
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
152 TestFunctional/parallel/MountCmd/specific-port 1.7
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.66
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 242.47
162 TestMultiControlPlane/serial/DeployApp 6.25
163 TestMultiControlPlane/serial/PingHostFromPods 1.36
164 TestMultiControlPlane/serial/AddWorkerNode 51.45
165 TestMultiControlPlane/serial/NodeLabels 0.08
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.98
167 TestMultiControlPlane/serial/CopyFile 14.3
168 TestMultiControlPlane/serial/StopSecondaryNode 91.62
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.74
170 TestMultiControlPlane/serial/RestartSecondaryNode 38
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.1
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 413.92
173 TestMultiControlPlane/serial/DeleteSecondaryNode 18.72
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.71
175 TestMultiControlPlane/serial/StopCluster 272.44
176 TestMultiControlPlane/serial/RestartCluster 133.5
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
178 TestMultiControlPlane/serial/AddSecondaryNode 94.54
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.96
183 TestJSONOutput/start/Command 83.93
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.86
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.72
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.37
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.21
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 92.84
215 TestMountStart/serial/StartWithMountFirst 26.65
216 TestMountStart/serial/VerifyMountFirst 0.39
217 TestMountStart/serial/StartWithMountSecond 30.35
218 TestMountStart/serial/VerifyMountSecond 0.41
219 TestMountStart/serial/DeleteFirst 0.58
220 TestMountStart/serial/VerifyMountPostDelete 0.41
221 TestMountStart/serial/Stop 1.33
222 TestMountStart/serial/RestartStopped 22.38
223 TestMountStart/serial/VerifyMountPostStop 0.4
226 TestMultiNode/serial/FreshStart2Nodes 112.48
227 TestMultiNode/serial/DeployApp2Nodes 5.69
228 TestMultiNode/serial/PingHostFrom2Pods 0.85
229 TestMultiNode/serial/AddNode 48.31
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.65
232 TestMultiNode/serial/CopyFile 7.82
233 TestMultiNode/serial/StopNode 3.25
234 TestMultiNode/serial/StartAfterStop 39.45
235 TestMultiNode/serial/RestartKeepsNodes 356.93
236 TestMultiNode/serial/DeleteNode 2.91
237 TestMultiNode/serial/StopMultiNode 181.71
238 TestMultiNode/serial/RestartMultiNode 91.67
239 TestMultiNode/serial/ValidateNameConflict 47.99
246 TestScheduledStopUnix 115.75
250 TestRunningBinaryUpgrade 187.63
252 TestKubernetesUpgrade 181.32
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
256 TestNoKubernetes/serial/StartWithK8s 123.18
257 TestNoKubernetes/serial/StartWithStopK8s 16.2
258 TestNoKubernetes/serial/Start 48.01
259 TestStoppedBinaryUpgrade/Setup 0.49
260 TestStoppedBinaryUpgrade/Upgrade 150.33
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
262 TestNoKubernetes/serial/ProfileList 1.16
263 TestNoKubernetes/serial/Stop 1.31
264 TestNoKubernetes/serial/StartNoArgs 70.27
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
266 TestStoppedBinaryUpgrade/MinikubeLogs 1.19
275 TestPause/serial/Start 114.54
283 TestNetworkPlugins/group/false 5.92
288 TestStartStop/group/old-k8s-version/serial/FirstStart 144.54
290 TestStartStop/group/no-preload/serial/FirstStart 89.62
293 TestStartStop/group/embed-certs/serial/FirstStart 89.32
294 TestStartStop/group/old-k8s-version/serial/DeployApp 12.52
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 88.49
297 TestStartStop/group/no-preload/serial/DeployApp 11.35
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.42
299 TestStartStop/group/old-k8s-version/serial/Stop 91.07
300 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.21
301 TestStartStop/group/no-preload/serial/Stop 91.36
302 TestStartStop/group/embed-certs/serial/DeployApp 10.32
303 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
304 TestStartStop/group/embed-certs/serial/Stop 91.12
305 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.29
306 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.08
307 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.57
308 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
309 TestStartStop/group/old-k8s-version/serial/SecondStart 48.45
310 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
311 TestStartStop/group/no-preload/serial/SecondStart 72.66
312 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 16.01
313 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
314 TestStartStop/group/embed-certs/serial/SecondStart 53.69
315 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
317 TestStartStop/group/old-k8s-version/serial/Pause 3.6
319 TestStartStop/group/newest-cni/serial/FirstStart 64.92
320 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.44
321 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.11
322 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
323 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 71.05
324 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.31
325 TestStartStop/group/no-preload/serial/Pause 3.21
326 TestNetworkPlugins/group/auto/Start 121.9
327 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 14.01
328 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
329 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
330 TestStartStop/group/embed-certs/serial/Pause 4
331 TestNetworkPlugins/group/kindnet/Start 89.68
332 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.49
334 TestStartStop/group/newest-cni/serial/Stop 8.48
335 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
336 TestStartStop/group/newest-cni/serial/SecondStart 74.73
337 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
338 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.09
339 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
340 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.12
341 TestNetworkPlugins/group/calico/Start 104.95
342 TestNetworkPlugins/group/auto/KubeletFlags 0.23
343 TestNetworkPlugins/group/auto/NetCatPod 10.36
344 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
347 TestStartStop/group/newest-cni/serial/Pause 3.05
348 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
349 TestNetworkPlugins/group/custom-flannel/Start 87.06
350 TestNetworkPlugins/group/auto/DNS 0.2
351 TestNetworkPlugins/group/auto/Localhost 0.19
352 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
353 TestNetworkPlugins/group/auto/HairPin 0.18
354 TestNetworkPlugins/group/kindnet/NetCatPod 15.37
355 TestNetworkPlugins/group/kindnet/DNS 0.21
356 TestNetworkPlugins/group/kindnet/Localhost 0.16
357 TestNetworkPlugins/group/kindnet/HairPin 0.17
358 TestNetworkPlugins/group/enable-default-cni/Start 99.99
359 TestNetworkPlugins/group/flannel/Start 101.98
360 TestNetworkPlugins/group/calico/ControllerPod 6.01
361 TestNetworkPlugins/group/calico/KubeletFlags 0.27
362 TestNetworkPlugins/group/calico/NetCatPod 15.38
363 TestNetworkPlugins/group/calico/DNS 0.18
364 TestNetworkPlugins/group/calico/Localhost 0.14
365 TestNetworkPlugins/group/calico/HairPin 0.15
366 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
367 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.32
368 TestNetworkPlugins/group/bridge/Start 87.7
369 TestNetworkPlugins/group/custom-flannel/DNS 0.16
370 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
371 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
372 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
373 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.29
374 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
375 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
376 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
377 TestNetworkPlugins/group/flannel/ControllerPod 6.01
378 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
379 TestNetworkPlugins/group/flannel/NetCatPod 11.31
380 TestNetworkPlugins/group/flannel/DNS 0.15
381 TestNetworkPlugins/group/flannel/Localhost 0.12
382 TestNetworkPlugins/group/flannel/HairPin 0.13
383 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
384 TestNetworkPlugins/group/bridge/NetCatPod 10.28
385 TestNetworkPlugins/group/bridge/DNS 0.19
386 TestNetworkPlugins/group/bridge/Localhost 0.14
387 TestNetworkPlugins/group/bridge/HairPin 0.14
x
+
TestDownloadOnly/v1.28.0/json-events (10.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-617375 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-617375 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.343753354s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (10.34s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0906 23:46:37.357806  133025 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0906 23:46:37.357916  133025 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21132-128697/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
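The preload-exists check above only verifies that the cached tarball is on disk. A minimal sketch of that lookup, assuming the cache layout visible in the logged path; the naming scheme is taken from this log and may change between minikube releases.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath mirrors the layout logged above:
// $MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-<preloadVer>-<k8sVer>-cri-o-overlay-amd64.tar.lz4
func preloadPath(minikubeHome, preloadVersion, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-%s-%s-cri-o-overlay-amd64.tar.lz4", preloadVersion, k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v18", "v1.28.0")
	if _, err := os.Stat(p); err != nil {
		fmt.Println("preload missing:", err)
		return
	}
	fmt.Println("found local preload:", p)
}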

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-617375
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-617375: exit status 85 (66.704913ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-617375 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-617375 │ jenkins │ v1.36.0 │ 06 Sep 25 23:46 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/06 23:46:27
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 23:46:27.060917  133037 out.go:360] Setting OutFile to fd 1 ...
	I0906 23:46:27.061223  133037 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0906 23:46:27.061234  133037 out.go:374] Setting ErrFile to fd 2...
	I0906 23:46:27.061239  133037 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0906 23:46:27.061487  133037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-128697/.minikube/bin
	W0906 23:46:27.061653  133037 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21132-128697/.minikube/config/config.json: open /home/jenkins/minikube-integration/21132-128697/.minikube/config/config.json: no such file or directory
	I0906 23:46:27.062530  133037 out.go:368] Setting JSON to true
	I0906 23:46:27.064482  133037 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1730,"bootTime":1757200657,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 23:46:27.064599  133037 start.go:140] virtualization: kvm guest
	I0906 23:46:27.067386  133037 out.go:99] [download-only-617375] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	W0906 23:46:27.067582  133037 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21132-128697/.minikube/cache/preloaded-tarball: no such file or directory
	I0906 23:46:27.067652  133037 notify.go:220] Checking for updates...
	I0906 23:46:27.069516  133037 out.go:171] MINIKUBE_LOCATION=21132
	I0906 23:46:27.071792  133037 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 23:46:27.073851  133037 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21132-128697/kubeconfig
	I0906 23:46:27.075736  133037 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-128697/.minikube
	I0906 23:46:27.077272  133037 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0906 23:46:27.079843  133037 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0906 23:46:27.080281  133037 driver.go:421] Setting default libvirt URI to qemu:///system
	I0906 23:46:27.213368  133037 out.go:99] Using the kvm2 driver based on user configuration
	I0906 23:46:27.213421  133037 start.go:304] selected driver: kvm2
	I0906 23:46:27.213437  133037 start.go:918] validating driver "kvm2" against <nil>
	I0906 23:46:27.213816  133037 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 23:46:27.213952  133037 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21132-128697/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	W0906 23:46:27.219346  133037 install.go:62] docker-machine-driver-kvm2: exit status 1
	I0906 23:46:27.221229  133037 out.go:99] Downloading driver docker-machine-driver-kvm2:
	I0906 23:46:27.221378  133037 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.36.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.36.0/docker-machine-driver-kvm2-amd64.sha256 -> /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:46:27.561497  133037 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0906 23:46:27.562194  133037 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32089MB, container=0MB
	I0906 23:46:27.562436  133037 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 23:46:27.562488  133037 cni.go:84] Creating CNI manager for ""
	I0906 23:46:27.562560  133037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 23:46:27.562572  133037 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 23:46:27.562667  133037 start.go:348] cluster config:
	{Name:download-only-617375 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-617375 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 23:46:27.562893  133037 iso.go:125] acquiring lock: {Name:mk3bd5f7fbe7836651644a94b41f2b6111c9b69d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 23:46:27.565567  133037 out.go:99] Downloading VM boot image ...
	I0906 23:46:27.565626  133037 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21132-128697/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso
	I0906 23:46:32.066137  133037 out.go:99] Starting "download-only-617375" primary control-plane node in "download-only-617375" cluster
	I0906 23:46:32.066191  133037 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0906 23:46:32.087556  133037 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0906 23:46:32.087604  133037 cache.go:58] Caching tarball of preloaded images
	I0906 23:46:32.087802  133037 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0906 23:46:32.089795  133037 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0906 23:46:32.089831  133037 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0906 23:46:32.113866  133037 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21132-128697/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0906 23:46:35.865442  133037 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0906 23:46:35.865539  133037 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21132-128697/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0906 23:46:36.760528  133037 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0906 23:46:36.760924  133037 profile.go:143] Saving config to /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/download-only-617375/config.json ...
	I0906 23:46:36.760960  133037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/download-only-617375/config.json: {Name:mkc09a5f6cb2c9a4065f63e0394b83d6496a0c2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 23:46:36.761164  133037 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0906 23:46:36.761504  133037 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21132-128697/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-617375 host does not exist
	  To start a cluster, run: "minikube start -p download-only-617375"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
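The download.go lines in this log fetch every artifact with a "?checksum=file:<url>.sha256" suffix, meaning the payload is verified against a published SHA-256 digest. A self-contained sketch of that verify-while-downloading pattern, using only the Go standard library rather than minikube's download package:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads url to dest while hashing it, then compares the
// digest against the hex checksum published at url+".sha256".
func fetchVerified(url, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}

	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	// .sha256 files may be "<hash>" or "<hash>  <name>", with a newline.
	fields := strings.Fields(string(want))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file for %s", url)
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != fields[0] {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, fields[0])
	}
	return nil
}

func main() {
	// Same artifact this test pulled, per the log above.
	url := "https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl"
	if err := fetchVerified(url, "kubectl"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("verified download complete")
}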

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.28s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-617375
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/json-events (4.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-045568 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-045568 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.254454086s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (4.25s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0906 23:46:42.097900  133025 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0906 23:46:42.097950  133025 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21132-128697/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-045568
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-045568: exit status 85 (61.730499ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-617375 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-617375 │ jenkins │ v1.36.0 │ 06 Sep 25 23:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.36.0 │ 06 Sep 25 23:46 UTC │ 06 Sep 25 23:46 UTC │
	│ delete  │ -p download-only-617375                                                                                                                                                 │ download-only-617375 │ jenkins │ v1.36.0 │ 06 Sep 25 23:46 UTC │ 06 Sep 25 23:46 UTC │
	│ start   │ -o=json --download-only -p download-only-045568 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-045568 │ jenkins │ v1.36.0 │ 06 Sep 25 23:46 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/06 23:46:37
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 23:46:37.887990  133234 out.go:360] Setting OutFile to fd 1 ...
	I0906 23:46:37.888281  133234 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0906 23:46:37.888290  133234 out.go:374] Setting ErrFile to fd 2...
	I0906 23:46:37.888294  133234 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0906 23:46:37.888484  133234 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-128697/.minikube/bin
	I0906 23:46:37.889135  133234 out.go:368] Setting JSON to true
	I0906 23:46:37.890003  133234 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1741,"bootTime":1757200657,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 23:46:37.890116  133234 start.go:140] virtualization: kvm guest
	I0906 23:46:37.892178  133234 out.go:99] [download-only-045568] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 23:46:37.892372  133234 notify.go:220] Checking for updates...
	I0906 23:46:37.893933  133234 out.go:171] MINIKUBE_LOCATION=21132
	I0906 23:46:37.895522  133234 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 23:46:37.896899  133234 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21132-128697/kubeconfig
	I0906 23:46:37.898127  133234 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-128697/.minikube
	I0906 23:46:37.899285  133234 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-045568 host does not exist
	  To start a cluster, run: "minikube start -p download-only-045568"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-045568
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (1.77s)

                                                
                                                
=== RUN   TestBinaryMirror
I0906 23:46:42.714647  133025 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-826525 --alsologtostderr --binary-mirror http://127.0.0.1:35655 --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:314: (dbg) Done: out/minikube-linux-amd64 start --download-only -p binary-mirror-826525 --alsologtostderr --binary-mirror http://127.0.0.1:35655 --driver=kvm2  --container-runtime=crio: (1.373907028s)
helpers_test.go:175: Cleaning up "binary-mirror-826525" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-826525
--- PASS: TestBinaryMirror (1.77s)
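The --binary-mirror http://127.0.0.1:35655 flag above redirects the kubectl/kubelet/kubeadm downloads from dl.k8s.io to a local endpoint. A throwaway mirror for a test run can be as small as a file server over a directory that reproduces the release paths; the ./mirror layout here is an assumption, not something the test itself creates.

package main

import (
	"log"
	"net/http"
)

// Serves ./mirror/release/v1.34.0/bin/linux/amd64/kubectl and friends at the
// same 127.0.0.1:35655 address the test passed to --binary-mirror.
func main() {
	log.Fatal(http.ListenAndServe("127.0.0.1:35655", http.FileServer(http.Dir("./mirror"))))
}

minikube is then pointed at it with --binary-mirror http://127.0.0.1:35655, exactly as in the Run line above.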

                                                
                                    
x
+
TestOffline (136.76s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-348321 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-348321 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (2m15.998527588s)
helpers_test.go:175: Cleaning up "offline-crio-348321" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-348321
--- PASS: TestOffline (136.76s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-331285
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-331285: exit status 85 (186.573093ms)

                                                
                                                
-- stdout --
	* Profile "addons-331285" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-331285"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-331285
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-331285: exit status 85 (186.915774ms)

                                                
                                                
-- stdout --
	* Profile "addons-331285" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-331285"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

                                                
                                    
x
+
TestAddons/Setup (160.06s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-331285 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-331285 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m40.062161931s)
--- PASS: TestAddons/Setup (160.06s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-331285 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-331285 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.59s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-331285 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-331285 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fd3aa778-0ef7-4c6b-b016-5ecebb8228bd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [fd3aa778-0ef7-4c6b-b016-5ecebb8228bd] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004166949s
addons_test.go:694: (dbg) Run:  kubectl --context addons-331285 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-331285 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-331285 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.59s)
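The FakeCredentials assertions reduce to exec'ing printenv inside the busybox pod and checking that the gcp-auth webhook injected the variables. The same probe, driven from Go via the kubectl invocation logged above; the context and pod names are the ones from this run.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	for _, key := range []string{"GOOGLE_APPLICATION_CREDENTIALS", "GOOGLE_CLOUD_PROJECT"} {
		// printenv exits non-zero when the variable is unset, which
		// surfaces here as a non-nil error.
		out, err := exec.Command("kubectl", "--context", "addons-331285",
			"exec", "busybox", "--", "printenv", key).CombinedOutput()
		if err != nil {
			log.Fatalf("%s not injected: %v\n%s", key, err, out)
		}
		fmt.Printf("%s=%s", key, out)
	}
}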

                                                
                                    
x
+
TestAddons/parallel/Registry (18.72s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 8.839949ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-rkfxb" [4a6ae9bf-8356-49dc-98bb-c032cbdfad51] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007004175s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-7478x" [6c5e8379-9df7-4f8e-8468-524608b8cb71] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004747656s
addons_test.go:392: (dbg) Run:  kubectl --context addons-331285 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-331285 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-331285 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.478225857s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-331285 ip
2025/09/06 23:50:02 [DEBUG] GET http://192.168.39.179:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-331285 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-331285 addons disable registry --alsologtostderr -v=1: (1.029180221s)
--- PASS: TestAddons/parallel/Registry (18.72s)
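The final registry probe is the plain HTTP GET recorded in the "[DEBUG] GET http://192.168.39.179:5000" line, against the node IP reported by "minikube ip". A sketch of that reachability check; the address is specific to this run.

package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.39.179:5000")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("registry reachable, status:", resp.Status)
}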

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.97s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 7.586568ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-331285
addons_test.go:332: (dbg) Run:  kubectl --context addons-331285 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-331285 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.97s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (6.34s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-ptb8b" [45516a04-8aa0-480d-bf07-793b7f0bf255] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005062691s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-331285 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.34s)

TestAddons/parallel/MetricsServer (6.82s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 6.207262ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-ngqvp" [75137e7c-be7f-4135-ae75-5dfa47025510] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005375236s
addons_test.go:463: (dbg) Run:  kubectl --context addons-331285 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-331285 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.82s)

TestAddons/parallel/CSI (47.7s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0906 23:49:58.276691  133025 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0906 23:49:58.286556  133025 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0906 23:49:58.286596  133025 kapi.go:107] duration metric: took 9.932208ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 9.943408ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-331285 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-331285 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-331285 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-331285 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-331285 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [fc9f346f-522a-4b3e-bdec-5f11aa6118df] Pending
helpers_test.go:352: "task-pv-pod" [fc9f346f-522a-4b3e-bdec-5f11aa6118df] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [fc9f346f-522a-4b3e-bdec-5f11aa6118df] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.00523144s
addons_test.go:572: (dbg) Run:  kubectl --context addons-331285 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-331285 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-331285 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-331285 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-331285 delete pod task-pv-pod: (2.366228894s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-331285 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-331285 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-331285 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-331285 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-331285 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-331285 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-331285 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-331285 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-331285 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-331285 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-331285 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-331285 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-331285 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [91348e8a-fcdd-43fd-b9a5-691c6fda835e] Pending
helpers_test.go:352: "task-pv-pod-restore" [91348e8a-fcdd-43fd-b9a5-691c6fda835e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [91348e8a-fcdd-43fd-b9a5-691c6fda835e] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005045085s
addons_test.go:614: (dbg) Run:  kubectl --context addons-331285 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-331285 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-331285 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-331285 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-331285 addons disable volumesnapshots --alsologtostderr -v=1: (1.141503107s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-331285 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-331285 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.095023514s)
--- PASS: TestAddons/parallel/CSI (47.70s)
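Condensed, the snapshot/restore flow exercised above is: provision a PVC and a pod that writes to it, snapshot the volume, delete the originals, then restore a new PVC and pod from the snapshot. A sketch using the same testdata manifests:

	kubectl --context addons-331285 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-331285 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-331285 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-331285 delete pod task-pv-pod && kubectl --context addons-331285 delete pvc hpvc
	kubectl --context addons-331285 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-331285 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml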

TestAddons/parallel/Headlamp (20.58s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-331285 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-331285 --alsologtostderr -v=1: (1.220747913s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6f46646d79-bqhcb" [8f22cd76-2fac-4b4b-ab94-05c42df4a40f] Pending
helpers_test.go:352: "headlamp-6f46646d79-bqhcb" [8f22cd76-2fac-4b4b-ab94-05c42df4a40f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6f46646d79-bqhcb" [8f22cd76-2fac-4b4b-ab94-05c42df4a40f] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.019624834s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-331285 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-331285 addons disable headlamp --alsologtostderr -v=1: (6.33600552s)
--- PASS: TestAddons/parallel/Headlamp (20.58s)

TestAddons/parallel/CloudSpanner (7.12s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-c55d4cb6d-ldrmn" [fad414e1-d18c-46db-b37a-990dd105fdbb] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005538229s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-331285 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-331285 addons disable cloud-spanner --alsologtostderr -v=1: (1.081953588s)
--- PASS: TestAddons/parallel/CloudSpanner (7.12s)

TestAddons/parallel/LocalPath (53.32s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-331285 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-331285 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-331285 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-331285 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-331285 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-331285 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-331285 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-331285 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [c90e6178-781b-4712-b4e1-886c992d920d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [c90e6178-781b-4712-b4e1-886c992d920d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [c90e6178-781b-4712-b4e1-886c992d920d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004838691s
addons_test.go:967: (dbg) Run:  kubectl --context addons-331285 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-331285 ssh "cat /opt/local-path-provisioner/pvc-1da79120-da54-4adf-b24c-2d5a1d1dd2da_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-331285 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-331285 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-331285 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-331285 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.480581112s)
--- PASS: TestAddons/parallel/LocalPath (53.32s)
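The provisioned data can be inspected directly on the node, as the test does above. Assuming local-path-provisioner's default pvc-<uid>_<namespace>_<name> directory layout (which matches the path logged above), <pvc-uid> below is a placeholder for the UID kubectl reports:

	kubectl --context addons-331285 get pvc test-pvc -o jsonpath='{.metadata.uid}'
	out/minikube-linux-amd64 -p addons-331285 ssh "cat /opt/local-path-provisioner/pvc-<pvc-uid>_default_test-pvc/file1"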

TestAddons/parallel/NvidiaDevicePlugin (6.8s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-xg7rw" [0d3facd1-ade2-448d-9450-22a49e7f155b] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004944102s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-331285 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.80s)

TestAddons/parallel/Yakd (12.14s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-cnl4m" [c8fb1679-1d1b-45dc-b858-7071d60bdc2a] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005241605s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-331285 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-331285 addons disable yakd --alsologtostderr -v=1: (6.134838435s)
--- PASS: TestAddons/parallel/Yakd (12.14s)

TestAddons/StoppedEnableDisable (91.2s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-331285
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-331285: (1m30.886447881s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-331285
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-331285
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-331285
--- PASS: TestAddons/StoppedEnableDisable (91.20s)

TestCertOptions (66s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-794643 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-794643 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m4.564815913s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-794643 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-794643 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-794643 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-794643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-794643
--- PASS: TestCertOptions (66.00s)
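To confirm by hand that the custom SANs and the non-default API server port took effect, the same two checks the test performs can be run directly; the grep patterns are illustrative additions:

	out/minikube-linux-amd64 -p cert-options-794643 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -E '192\.168\.15\.15|www\.google\.com'
	kubectl --context cert-options-794643 config view | grep 8555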

TestCertExpiration (333.81s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-456862 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-456862 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m11.284310073s)
E0907 00:48:40.803512  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-456862 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-456862 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m21.431976892s)
helpers_test.go:175: Cleaning up "cert-expiration-456862" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-456862
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-456862: (1.09397142s)
--- PASS: TestCertExpiration (333.81s)
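The flow here is two starts on the same profile: the first issues certificates with a 3-minute lifetime, and once they have lapsed the second start must rotate them and bring the cluster back. A rough sketch (the sleep is an assumption standing in for the wait between the two logged starts):

	out/minikube-linux-amd64 start -p cert-expiration-456862 --memory=3072 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
	sleep 180  # let the short-lived certificates expire
	out/minikube-linux-amd64 start -p cert-expiration-456862 --memory=3072 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio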

TestForceSystemdFlag (99.8s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-840597 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-840597 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m38.837961178s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-840597 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-840597" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-840597
--- PASS: TestForceSystemdFlag (99.80s)
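A quick way to verify that --force-systemd reached the runtime is to read CRI-O's generated drop-in, as the test does; grepping for the cgroup manager setting is an illustrative assumption:

	out/minikube-linux-amd64 -p force-systemd-flag-840597 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager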

TestForceSystemdEnv (47.59s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-445282 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-445282 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (46.669879246s)
helpers_test.go:175: Cleaning up "force-systemd-env-445282" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-445282
--- PASS: TestForceSystemdEnv (47.59s)

TestKVMDriverInstallOrUpdate (1.4s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0907 00:52:09.181117  133025 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0907 00:52:09.181323  133025 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0907 00:52:09.225737  133025 install.go:62] docker-machine-driver-kvm2: exit status 1
W0907 00:52:09.225971  133025 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0907 00:52:09.226096  133025 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1604672463/001/docker-machine-driver-kvm2
I0907 00:52:09.405344  133025 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1604672463/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc000605e70 gz:0xc000605e78 tar:0xc000605e20 tar.bz2:0xc000605e30 tar.gz:0xc000605e40 tar.xz:0xc000605e50 tar.zst:0xc000605e60 tbz2:0xc000605e30 tgz:0xc000605e40 txz:0xc000605e50 tzst:0xc000605e60 xz:0xc000605e80 zip:0xc000605e90 zst:0xc000605e88] Getters:map[file:0xc001732700 http:0xc0008a4870 https:0xc0008a48c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0907 00:52:09.405395  133025 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1604672463/001/docker-machine-driver-kvm2
I0907 00:52:10.099366  133025 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0907 00:52:10.099470  133025 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0907 00:52:10.132092  133025 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0907 00:52:10.132130  133025 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0907 00:52:10.132225  133025 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0907 00:52:10.132261  133025 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1604672463/002/docker-machine-driver-kvm2
I0907 00:52:10.156104  133025 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1604672463/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc000605e70 gz:0xc000605e78 tar:0xc000605e20 tar.bz2:0xc000605e30 tar.gz:0xc000605e40 tar.xz:0xc000605e50 tar.zst:0xc000605e60 tbz2:0xc000605e30 tgz:0xc000605e40 txz:0xc000605e50 tzst:0xc000605e60 xz:0xc000605e80 zip:0xc000605e90 zst:0xc000605e88] Getters:map[file:0xc001a22d70 http:0xc000692d20 https:0xc000692d70] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0907 00:52:10.156158  133025 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1604672463/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.40s)
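The download logic visible in the log tries the arch-suffixed release asset first and, when its checksum file 404s, falls back to the unsuffixed common name. A shell equivalent of that fallback, with the version pinned as in the test fixture:

	base=https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2
	curl -fLo docker-machine-driver-kvm2 "${base}-amd64" || curl -fLo docker-machine-driver-kvm2 "${base}"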

TestErrorSpam/setup (44.89s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-127379 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-127379 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-127379 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-127379 --driver=kvm2  --container-runtime=crio: (44.886551656s)
--- PASS: TestErrorSpam/setup (44.89s)

TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127379 --log_dir /tmp/nospam-127379 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127379 --log_dir /tmp/nospam-127379 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127379 --log_dir /tmp/nospam-127379 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.84s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127379 --log_dir /tmp/nospam-127379 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127379 --log_dir /tmp/nospam-127379 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127379 --log_dir /tmp/nospam-127379 status
--- PASS: TestErrorSpam/status (0.84s)

TestErrorSpam/pause (1.82s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127379 --log_dir /tmp/nospam-127379 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127379 --log_dir /tmp/nospam-127379 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127379 --log_dir /tmp/nospam-127379 pause
--- PASS: TestErrorSpam/pause (1.82s)

TestErrorSpam/unpause (2.17s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127379 --log_dir /tmp/nospam-127379 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127379 --log_dir /tmp/nospam-127379 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127379 --log_dir /tmp/nospam-127379 unpause
--- PASS: TestErrorSpam/unpause (2.17s)

TestErrorSpam/stop (4.85s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127379 --log_dir /tmp/nospam-127379 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-127379 --log_dir /tmp/nospam-127379 stop: (2.375683395s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127379 --log_dir /tmp/nospam-127379 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-127379 --log_dir /tmp/nospam-127379 stop: (1.233834029s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-127379 --log_dir /tmp/nospam-127379 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-127379 --log_dir /tmp/nospam-127379 stop: (1.243635168s)
--- PASS: TestErrorSpam/stop (4.85s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21132-128697/.minikube/files/etc/test/nested/copy/133025/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (89.5s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-445996 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-445996 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m29.502170714s)
--- PASS: TestFunctional/serial/StartWithProxy (89.50s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (57.99s)

=== RUN   TestFunctional/serial/SoftStart
I0906 23:56:51.350853  133025 config.go:182] Loaded profile config "functional-445996": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-445996 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-445996 --alsologtostderr -v=8: (57.987487815s)
functional_test.go:678: soft start took 57.988342781s for "functional-445996" cluster.
I0906 23:57:49.338725  133025 config.go:182] Loaded profile config "functional-445996": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (57.99s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-445996 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-445996 cache add registry.k8s.io/pause:3.1: (1.097660101s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-445996 cache add registry.k8s.io/pause:3.3: (1.12594295s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-445996 cache add registry.k8s.io/pause:latest: (1.139765096s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.36s)

TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-445996 /tmp/TestFunctionalserialCacheCmdcacheadd_local3023380172/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 cache add minikube-local-cache-test:functional-445996
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 cache delete minikube-local-cache-test:functional-445996
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-445996
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445996 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (239.89879ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-445996 cache reload: (1.023346505s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)
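The reload cycle above, by hand: remove the cached image from the node's runtime, confirm it is gone, then repopulate it from the host-side cache.

	out/minikube-linux-amd64 -p functional-445996 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-445996 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: image is gone
	out/minikube-linux-amd64 -p functional-445996 cache reload
	out/minikube-linux-amd64 -p functional-445996 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again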

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 kubectl -- --context functional-445996 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-445996 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (36.23s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-445996 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-445996 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.226829745s)
functional_test.go:776: restart took 36.227003819s for "functional-445996" cluster.
I0906 23:58:32.676561  133025 config.go:182] Loaded profile config "functional-445996": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (36.23s)
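--extra-config takes component.key=value pairs and is applied here via a soft restart of the existing profile, which is why this completes in about 36s rather than a full re-provision. The invocation, plus an illustrative check (assumed, not part of the test) that the flag reached the apiserver pod:

	out/minikube-linux-amd64 start -p functional-445996 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	kubectl --context functional-445996 -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins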

TestFunctional/serial/ComponentHealth (0.08s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-445996 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.08s)

TestFunctional/serial/LogsCmd (1.58s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-445996 logs: (1.582356618s)
--- PASS: TestFunctional/serial/LogsCmd (1.58s)

TestFunctional/serial/LogsFileCmd (1.59s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 logs --file /tmp/TestFunctionalserialLogsFileCmd539744389/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-445996 logs --file /tmp/TestFunctionalserialLogsFileCmd539744389/001/logs.txt: (1.587545862s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.59s)

TestFunctional/serial/InvalidService (4.36s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-445996 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-445996
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-445996: exit status 115 (338.008075ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.88:32103 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-445996 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.36s)
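The negative path exercised here: a Service whose selector matches no running pod, where `minikube service` should fail fast with SVC_UNREACHABLE (exit status 115) instead of hanging. Reproduced with the same manifest:

	kubectl --context functional-445996 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-amd64 service invalid-svc -p functional-445996  # exit status 115
	kubectl --context functional-445996 delete -f testdata/invalidsvc.yaml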

TestFunctional/parallel/ConfigCmd (0.42s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445996 config get cpus: exit status 14 (77.742066ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445996 config get cpus: exit status 14 (69.356558ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)

TestFunctional/parallel/DashboardCmd (9.37s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-445996 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-445996 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 141373: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.37s)

TestFunctional/parallel/DryRun (0.31s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-445996 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-445996 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (152.528781ms)

-- stdout --
	* [functional-445996] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21132
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21132-128697/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-128697/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0906 23:59:04.510352  141218 out.go:360] Setting OutFile to fd 1 ...
	I0906 23:59:04.510582  141218 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0906 23:59:04.510591  141218 out.go:374] Setting ErrFile to fd 2...
	I0906 23:59:04.510595  141218 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0906 23:59:04.510800  141218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-128697/.minikube/bin
	I0906 23:59:04.511354  141218 out.go:368] Setting JSON to false
	I0906 23:59:04.512281  141218 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2488,"bootTime":1757200657,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 23:59:04.512392  141218 start.go:140] virtualization: kvm guest
	I0906 23:59:04.514181  141218 out.go:179] * [functional-445996] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 23:59:04.515325  141218 notify.go:220] Checking for updates...
	I0906 23:59:04.515355  141218 out.go:179]   - MINIKUBE_LOCATION=21132
	I0906 23:59:04.516616  141218 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 23:59:04.517878  141218 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21132-128697/kubeconfig
	I0906 23:59:04.519114  141218 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-128697/.minikube
	I0906 23:59:04.520255  141218 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 23:59:04.521372  141218 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 23:59:04.522936  141218 config.go:182] Loaded profile config "functional-445996": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0906 23:59:04.523465  141218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:59:04.523546  141218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:59:04.540307  141218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36273
	I0906 23:59:04.540808  141218 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:59:04.541336  141218 main.go:141] libmachine: Using API Version  1
	I0906 23:59:04.541357  141218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:59:04.541734  141218 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:59:04.541931  141218 main.go:141] libmachine: (functional-445996) Calling .DriverName
	I0906 23:59:04.542225  141218 driver.go:421] Setting default libvirt URI to qemu:///system
	I0906 23:59:04.542536  141218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:59:04.542586  141218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:59:04.558874  141218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35513
	I0906 23:59:04.559385  141218 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:59:04.559875  141218 main.go:141] libmachine: Using API Version  1
	I0906 23:59:04.559900  141218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:59:04.560273  141218 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:59:04.560471  141218 main.go:141] libmachine: (functional-445996) Calling .DriverName
	I0906 23:59:04.602582  141218 out.go:179] * Using the kvm2 driver based on existing profile
	I0906 23:59:04.603723  141218 start.go:304] selected driver: kvm2
	I0906 23:59:04.603744  141218 start.go:918] validating driver "kvm2" against &{Name:functional-445996 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-445996 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 23:59:04.603939  141218 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 23:59:04.606756  141218 out.go:203] 
	W0906 23:59:04.608067  141218 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0906 23:59:04.609199  141218 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-445996 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.31s)
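
Note: the RSRC_INSUFFICIENT_REQ_MEMORY exit above is the expected outcome, not a failure. --dry-run runs only the validation phase, so the 250MB request is rejected against the 1800MB floor without the VM being touched. A minimal sketch of both directions by hand (the 4096MB value is an arbitrary passing size, not taken from this log):

    # Rejected in validation: 250MB < 1800MB minimum (exit 23)
    out/minikube-linux-amd64 start -p functional-445996 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
    # Validates cleanly; no VM is created or modified
    out/minikube-linux-amd64 start -p functional-445996 --dry-run --memory 4096MB --driver=kvm2 --container-runtime=crio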

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-445996 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-445996 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (147.787102ms)

-- stdout --
	* [functional-445996] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21132
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21132-128697/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-128697/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0906 23:59:04.816731  141288 out.go:360] Setting OutFile to fd 1 ...
	I0906 23:59:04.816862  141288 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0906 23:59:04.816870  141288 out.go:374] Setting ErrFile to fd 2...
	I0906 23:59:04.816874  141288 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0906 23:59:04.817189  141288 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-128697/.minikube/bin
	I0906 23:59:04.817735  141288 out.go:368] Setting JSON to false
	I0906 23:59:04.818732  141288 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2488,"bootTime":1757200657,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 23:59:04.818834  141288 start.go:140] virtualization: kvm guest
	I0906 23:59:04.820437  141288 out.go:179] * [functional-445996] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	I0906 23:59:04.821609  141288 out.go:179]   - MINIKUBE_LOCATION=21132
	I0906 23:59:04.821614  141288 notify.go:220] Checking for updates...
	I0906 23:59:04.822877  141288 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 23:59:04.824266  141288 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21132-128697/kubeconfig
	I0906 23:59:04.825522  141288 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-128697/.minikube
	I0906 23:59:04.826612  141288 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 23:59:04.827667  141288 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 23:59:04.829135  141288 config.go:182] Loaded profile config "functional-445996": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0906 23:59:04.829625  141288 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:59:04.829690  141288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:59:04.849088  141288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45707
	I0906 23:59:04.849499  141288 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:59:04.850061  141288 main.go:141] libmachine: Using API Version  1
	I0906 23:59:04.850100  141288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:59:04.850528  141288 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:59:04.850747  141288 main.go:141] libmachine: (functional-445996) Calling .DriverName
	I0906 23:59:04.851012  141288 driver.go:421] Setting default libvirt URI to qemu:///system
	I0906 23:59:04.851341  141288 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0906 23:59:04.851386  141288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 23:59:04.868006  141288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46489
	I0906 23:59:04.868586  141288 main.go:141] libmachine: () Calling .GetVersion
	I0906 23:59:04.869139  141288 main.go:141] libmachine: Using API Version  1
	I0906 23:59:04.869162  141288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 23:59:04.869552  141288 main.go:141] libmachine: () Calling .GetMachineName
	I0906 23:59:04.869760  141288 main.go:141] libmachine: (functional-445996) Calling .DriverName
	I0906 23:59:04.906181  141288 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I0906 23:59:04.907272  141288 start.go:304] selected driver: kvm2
	I0906 23:59:04.907286  141288 start.go:918] validating driver "kvm2" against &{Name:functional-445996 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-445996 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 23:59:04.907399  141288 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 23:59:04.909340  141288 out.go:203] 
	W0906 23:59:04.910466  141288 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0906 23:59:04.911561  141288 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
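
Note: the French output is the point of this test. "Utilisation du pilote kvm2 basé sur le profil existant" is the localized "Using the kvm2 driver based on existing profile", and the X line is the French rendering of the same RSRC_INSUFFICIENT_REQ_MEMORY message seen in DryRun. A sketch of reproducing it by hand, assuming minikube picks its message catalogue from the standard locale variables (the exact variable the harness sets is not visible in this log):

    # Request French output; exit 23 is still expected for the 250MB request
    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-445996 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio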

TestFunctional/parallel/StatusCmd (0.96s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.96s)
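
Note: in the -f argument above, "kublet:" is literal label text, not a field reference; Go templates substitute only the {{.Field}} actions, so the typo has no effect on what is checked. The same custom-format query with the label spelled out:

    # Labels are free text; {{.Host}}, {{.Kubelet}}, {{.APIServer}}, {{.Kubeconfig}} are the evaluated fields
    out/minikube-linux-amd64 -p functional-445996 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'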

TestFunctional/parallel/ServiceCmdConnect (8.63s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-445996 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-445996 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-9njbb" [992f7df2-9a8a-4f16-9142-1b0714026eb6] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-9njbb" [992f7df2-9a8a-4f16-9142-1b0714026eb6] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003156311s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.88:31363
functional_test.go:1680: http://192.168.39.88:31363: success! body:
Request served by hello-node-connect-7d85dfc575-9njbb

HTTP/1.1 GET /

Host: 192.168.39.88:31363
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.63s)
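
Note: this is the standard NodePort round trip: create a deployment, expose it on a node port, then let minikube resolve the node IP and allocated port. A condensed sketch using the names from this run (the port is allocated dynamically, e.g. 31363 here):

    kubectl --context functional-445996 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-445996 expose deployment hello-node-connect --type=NodePort --port=8080
    # service --url prints http://<node-ip>:<node-port>; curl it to hit the echo server
    curl "$(out/minikube-linux-amd64 -p functional-445996 service hello-node-connect --url)"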

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (38.53s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [17a38a40-c17f-4a98-aa40-e0b7c0c51233] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0081548s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-445996 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-445996 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-445996 get pvc myclaim -o=json
I0906 23:58:49.127563  133025 retry.go:31] will retry after 2.480168512s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:920a7e75-6300-4d64-9fd0-5f41077ab9b3 ResourceVersion:694 Generation:0 CreationTimestamp:2025-09-06 23:58:48 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001cef1f0 VolumeMode:0xc001cef200 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-445996 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-445996 apply -f testdata/storage-provisioner/pod.yaml
I0906 23:58:51.828984  133025 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [a3753e63-164a-4a4b-aa25-fc03fe74ac0b] Pending
helpers_test.go:352: "sp-pod" [a3753e63-164a-4a4b-aa25-fc03fe74ac0b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [a3753e63-164a-4a4b-aa25-fc03fe74ac0b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.005513155s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-445996 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-445996 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-445996 apply -f testdata/storage-provisioner/pod.yaml
I0906 23:59:06.969180  133025 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [7f0813bf-5bd0-4911-80fd-982290bef967] Pending
helpers_test.go:352: "sp-pod" [7f0813bf-5bd0-4911-80fd-982290bef967] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [7f0813bf-5bd0-4911-80fd-982290bef967] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.00431826s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-445996 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (38.53s)
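
Note: the single retry at 23:58:49 is normal; the claim reports Pending until the storage-provisioner binds it, and the harness polls for phase Bound. From the claim dump in that retry message, testdata/storage-provisioner/pvc.yaml is approximately the following manifest (reconstructed from the log, not the verbatim file):

    kubectl --context functional-445996 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 500Mi
      volumeMode: Filesystem
    EOF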

TestFunctional/parallel/SSHCmd (0.57s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.57s)

TestFunctional/parallel/CpCmd (1.62s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh -n functional-445996 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 cp functional-445996:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2737280264/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh -n functional-445996 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh -n functional-445996 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.62s)
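
Note: the three variants above cover both copy directions plus creation of a missing target directory. A condensed sketch (the /tmp destination here is an arbitrary stand-in for the harness's temp dir):

    # host -> VM
    out/minikube-linux-amd64 -p functional-445996 cp testdata/cp-test.txt /home/docker/cp-test.txt
    # VM -> host, using the <profile>:<path> source form
    out/minikube-linux-amd64 -p functional-445996 cp functional-445996:/home/docker/cp-test.txt /tmp/cp-test.txt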

TestFunctional/parallel/MySQL (21.78s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-445996 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-7tdrt" [05a4374e-622a-4abc-ab6a-04999426b55d] Pending
helpers_test.go:352: "mysql-5bb876957f-7tdrt" [05a4374e-622a-4abc-ab6a-04999426b55d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-7tdrt" [05a4374e-622a-4abc-ab6a-04999426b55d] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.023052006s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-445996 exec mysql-5bb876957f-7tdrt -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-445996 exec mysql-5bb876957f-7tdrt -- mysql -ppassword -e "show databases;": exit status 1 (325.483972ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0906 23:58:59.151814  133025 retry.go:31] will retry after 803.939137ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-445996 exec mysql-5bb876957f-7tdrt -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-445996 exec mysql-5bb876957f-7tdrt -- mysql -ppassword -e "show databases;": exit status 1 (160.404414ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0906 23:59:00.117435  133025 retry.go:31] will retry after 2.062425658s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-445996 exec mysql-5bb876957f-7tdrt -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.78s)
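
Note: the two failed attempts above are ordinary MySQL startup noise rather than test flakiness: ERROR 2002 means mysqld is not yet listening on its socket, and ERROR 1045 can occur while the image's entrypoint is still provisioning the root account, so the harness retries with backoff until the query succeeds. A hand-rolled equivalent of that loop (pod name taken from this run):

    until kubectl --context functional-445996 exec mysql-5bb876957f-7tdrt -- mysql -ppassword -e "show databases;"; do
      sleep 2   # wait for mysqld to come up and accept the root password
    done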

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/133025/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh "sudo cat /etc/test/nested/copy/133025/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)
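
Note: the path probed above follows minikube's file-sync convention: a file placed at $MINIKUBE_HOME/files/<path> on the host is copied to /<path> inside the VM at start (133025 is the test process PID, used to keep runs unique). A sketch, assuming MINIKUBE_HOME is set as in this run:

    # Stage a file on the host ...
    mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/133025"
    echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/nested/copy/133025/hosts"
    # ... after the next start it appears at the mirrored path in the VM
    out/minikube-linux-amd64 -p functional-445996 ssh "sudo cat /etc/test/nested/copy/133025/hosts"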

TestFunctional/parallel/CertSync (1.62s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/133025.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh "sudo cat /etc/ssl/certs/133025.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/133025.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh "sudo cat /usr/share/ca-certificates/133025.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1330252.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh "sudo cat /etc/ssl/certs/1330252.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1330252.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh "sudo cat /usr/share/ca-certificates/1330252.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.62s)
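
Note: certificates staged under $MINIKUBE_HOME/certs are synced into the VM's trust store, and the 51391683.0 / 3ec20f2e.0 names follow the OpenSSL subject-hash alias scheme used in /etc/ssl/certs. A way to check the hash mapping by hand, assuming the PEM files are on the host (the expected values are inferred from the paths probed above):

    openssl x509 -in 133025.pem -noout -hash     # expected: 51391683
    openssl x509 -in 1330252.pem -noout -hash    # expected: 3ec20f2e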

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-445996 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
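
Note: the go-template above ranges over the first node's label map and prints only the keys. An equivalent quick check without a template:

    kubectl --context functional-445996 get nodes --show-labels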

TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445996 ssh "sudo systemctl is-active docker": exit status 1 (316.729544ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445996 ssh "sudo systemctl is-active containerd": exit status 1 (276.049849ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
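
Note: the exit codes above are the expected shape of this check. With crio as the active runtime, "systemctl is-active docker" and "systemctl is-active containerd" print "inactive" and exit 3 (the systemd convention for an inactive unit); minikube ssh surfaces that as "Process exited with status 3" and itself exits 1. Sketch:

    out/minikube-linux-amd64 -p functional-445996 ssh "sudo systemctl is-active crio"    # active, remote exit 0
    out/minikube-linux-amd64 -p functional-445996 ssh "sudo systemctl is-active docker"  # inactive, remote exit 3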

TestFunctional/parallel/License (0.39s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.39s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
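
Note: all three subtests run the same command; update-context rewrites the kubeconfig entry for the profile to the VM's current IP and reports whether anything changed. A sketch of verifying the result (the jsonpath filter is illustrative):

    out/minikube-linux-amd64 -p functional-445996 update-context
    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-445996")].cluster.server}'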

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.77s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.77s)

TestFunctional/parallel/ImageCommands/ImageListShort (1.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-445996 image ls --format short --alsologtostderr: (1.330102067s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-445996 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-445996
localhost/kicbase/echo-server:functional-445996
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-445996 image ls --format short --alsologtostderr:
I0906 23:59:11.105772  141550 out.go:360] Setting OutFile to fd 1 ...
I0906 23:59:11.105943  141550 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0906 23:59:11.105958  141550 out.go:374] Setting ErrFile to fd 2...
I0906 23:59:11.105965  141550 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0906 23:59:11.106352  141550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-128697/.minikube/bin
I0906 23:59:11.107375  141550 config.go:182] Loaded profile config "functional-445996": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0906 23:59:11.107540  141550 config.go:182] Loaded profile config "functional-445996": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0906 23:59:11.108158  141550 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
I0906 23:59:11.108241  141550 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:59:11.126113  141550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43567
I0906 23:59:11.126680  141550 main.go:141] libmachine: () Calling .GetVersion
I0906 23:59:11.127258  141550 main.go:141] libmachine: Using API Version  1
I0906 23:59:11.127288  141550 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:59:11.127690  141550 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:59:11.127919  141550 main.go:141] libmachine: (functional-445996) Calling .GetState
I0906 23:59:11.130017  141550 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
I0906 23:59:11.130079  141550 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:59:11.147272  141550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46883
I0906 23:59:11.147895  141550 main.go:141] libmachine: () Calling .GetVersion
I0906 23:59:11.148506  141550 main.go:141] libmachine: Using API Version  1
I0906 23:59:11.148536  141550 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:59:11.148928  141550 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:59:11.149145  141550 main.go:141] libmachine: (functional-445996) Calling .DriverName
I0906 23:59:11.149385  141550 ssh_runner.go:195] Run: systemctl --version
I0906 23:59:11.149424  141550 main.go:141] libmachine: (functional-445996) Calling .GetSSHHostname
I0906 23:59:11.152836  141550 main.go:141] libmachine: (functional-445996) DBG | domain functional-445996 has defined MAC address 52:54:00:e1:ff:1e in network mk-functional-445996
I0906 23:59:11.153351  141550 main.go:141] libmachine: (functional-445996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ff:1e", ip: ""} in network mk-functional-445996: {Iface:virbr1 ExpiryTime:2025-09-07 00:55:37 +0000 UTC Type:0 Mac:52:54:00:e1:ff:1e Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:functional-445996 Clientid:01:52:54:00:e1:ff:1e}
I0906 23:59:11.153387  141550 main.go:141] libmachine: (functional-445996) DBG | domain functional-445996 has defined IP address 192.168.39.88 and MAC address 52:54:00:e1:ff:1e in network mk-functional-445996
I0906 23:59:11.153551  141550 main.go:141] libmachine: (functional-445996) Calling .GetSSHPort
I0906 23:59:11.153732  141550 main.go:141] libmachine: (functional-445996) Calling .GetSSHKeyPath
I0906 23:59:11.153858  141550 main.go:141] libmachine: (functional-445996) Calling .GetSSHUsername
I0906 23:59:11.154112  141550 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/functional-445996/id_rsa Username:docker}
I0906 23:59:11.262644  141550 ssh_runner.go:195] Run: sudo crictl images --output json
I0906 23:59:12.369939  141550 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.107244544s)
I0906 23:59:12.370441  141550 main.go:141] libmachine: Making call to close driver server
I0906 23:59:12.370460  141550 main.go:141] libmachine: (functional-445996) Calling .Close
I0906 23:59:12.370817  141550 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:59:12.370835  141550 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 23:59:12.370845  141550 main.go:141] libmachine: Making call to close driver server
I0906 23:59:12.370853  141550 main.go:141] libmachine: (functional-445996) Calling .Close
I0906 23:59:12.371192  141550 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:59:12.371205  141550 main.go:141] libmachine: (functional-445996) DBG | Closing plugin on server side
I0906 23:59:12.371211  141550 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.33s)
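
Note: most of the 1.33s here is the "sudo crictl images --output json" round trip visible in the stderr trace. The same listing supports the other encodings exercised in the next sections:

    out/minikube-linux-amd64 -p functional-445996 image ls --format short
    out/minikube-linux-amd64 -p functional-445996 image ls --format table
    out/minikube-linux-amd64 -p functional-445996 image ls --format json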

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-445996 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ 46169d968e920 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-445996  │ ffe9d25b355ea │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ a0af72f2ec6d6 │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ df0860106674d │ 73.1MB │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-445996  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/library/nginx                 │ latest             │ ad5708199ec7d │ 197MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ 90550c43ad2bc │ 89.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-445996 image ls --format table --alsologtostderr:
I0906 23:59:14.600420  141846 out.go:360] Setting OutFile to fd 1 ...
I0906 23:59:14.600900  141846 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0906 23:59:14.600914  141846 out.go:374] Setting ErrFile to fd 2...
I0906 23:59:14.600921  141846 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0906 23:59:14.601410  141846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-128697/.minikube/bin
I0906 23:59:14.602873  141846 config.go:182] Loaded profile config "functional-445996": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0906 23:59:14.603041  141846 config.go:182] Loaded profile config "functional-445996": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0906 23:59:14.603447  141846 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
I0906 23:59:14.603500  141846 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:59:14.621341  141846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39795
I0906 23:59:14.622037  141846 main.go:141] libmachine: () Calling .GetVersion
I0906 23:59:14.622674  141846 main.go:141] libmachine: Using API Version  1
I0906 23:59:14.622733  141846 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:59:14.623169  141846 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:59:14.623403  141846 main.go:141] libmachine: (functional-445996) Calling .GetState
I0906 23:59:14.625636  141846 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
I0906 23:59:14.625690  141846 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:59:14.643228  141846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38239
I0906 23:59:14.643781  141846 main.go:141] libmachine: () Calling .GetVersion
I0906 23:59:14.644293  141846 main.go:141] libmachine: Using API Version  1
I0906 23:59:14.644317  141846 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:59:14.644676  141846 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:59:14.644863  141846 main.go:141] libmachine: (functional-445996) Calling .DriverName
I0906 23:59:14.645092  141846 ssh_runner.go:195] Run: systemctl --version
I0906 23:59:14.645118  141846 main.go:141] libmachine: (functional-445996) Calling .GetSSHHostname
I0906 23:59:14.648342  141846 main.go:141] libmachine: (functional-445996) DBG | domain functional-445996 has defined MAC address 52:54:00:e1:ff:1e in network mk-functional-445996
I0906 23:59:14.648948  141846 main.go:141] libmachine: (functional-445996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ff:1e", ip: ""} in network mk-functional-445996: {Iface:virbr1 ExpiryTime:2025-09-07 00:55:37 +0000 UTC Type:0 Mac:52:54:00:e1:ff:1e Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:functional-445996 Clientid:01:52:54:00:e1:ff:1e}
I0906 23:59:14.648981  141846 main.go:141] libmachine: (functional-445996) DBG | domain functional-445996 has defined IP address 192.168.39.88 and MAC address 52:54:00:e1:ff:1e in network mk-functional-445996
I0906 23:59:14.649265  141846 main.go:141] libmachine: (functional-445996) Calling .GetSSHPort
I0906 23:59:14.649464  141846 main.go:141] libmachine: (functional-445996) Calling .GetSSHKeyPath
I0906 23:59:14.649612  141846 main.go:141] libmachine: (functional-445996) Calling .GetSSHUsername
I0906 23:59:14.649747  141846 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/functional-445996/id_rsa Username:docker}
I0906 23:59:14.751611  141846 ssh_runner.go:195] Run: sudo crictl images --output json
I0906 23:59:14.802024  141846 main.go:141] libmachine: Making call to close driver server
I0906 23:59:14.802050  141846 main.go:141] libmachine: (functional-445996) Calling .Close
I0906 23:59:14.802430  141846 main.go:141] libmachine: (functional-445996) DBG | Closing plugin on server side
I0906 23:59:14.802459  141846 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:59:14.802475  141846 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 23:59:14.802491  141846 main.go:141] libmachine: Making call to close driver server
I0906 23:59:14.802499  141846 main.go:141] libmachine: (functional-445996) Calling .Close
I0906 23:59:14.802779  141846 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:59:14.802802  141846 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 23:59:14.802802  141846 main.go:141] libmachine: (functional-445996) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-445996 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["regis
try.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"89050097"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"76004183"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":[
"registry.k8s.io/pause:latest"],"size":"247077"},{"id":"ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224","repoDigests":["docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57","docker.io/library/nginx@sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7"],"repoTags":["docker.io/library/nginx:latest"],"size":"196544386"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"73138071"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985
f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"53844823"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.
28.4-glibc"],"size":"4631262"},{"id":"ffe9d25b355ea5f8f6c28524f8320fa397225cc9304d776498dcadbea64e6b51","repoDigests":["localhost/minikube-local-cache-test@sha256:e9ffef205aa71a8bd1d64d08bc8e952f4452e7360e89836455a4a27b1f5f7b5e"],"repoTags":["localhost/minikube-local-cache-test:functional-445996"],"size":"3330"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-445996"],"size":"4943877"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787
b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-445996 image ls --format json --alsologtostderr:
I0906 23:59:14.338150  141822 out.go:360] Setting OutFile to fd 1 ...
I0906 23:59:14.338435  141822 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0906 23:59:14.338447  141822 out.go:374] Setting ErrFile to fd 2...
I0906 23:59:14.338454  141822 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0906 23:59:14.338675  141822 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-128697/.minikube/bin
I0906 23:59:14.339297  141822 config.go:182] Loaded profile config "functional-445996": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0906 23:59:14.339414  141822 config.go:182] Loaded profile config "functional-445996": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0906 23:59:14.339795  141822 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
I0906 23:59:14.339864  141822 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:59:14.361928  141822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35449
I0906 23:59:14.362574  141822 main.go:141] libmachine: () Calling .GetVersion
I0906 23:59:14.363247  141822 main.go:141] libmachine: Using API Version  1
I0906 23:59:14.363281  141822 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:59:14.363784  141822 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:59:14.363987  141822 main.go:141] libmachine: (functional-445996) Calling .GetState
I0906 23:59:14.366292  141822 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
I0906 23:59:14.366347  141822 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:59:14.383223  141822 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36939
I0906 23:59:14.383786  141822 main.go:141] libmachine: () Calling .GetVersion
I0906 23:59:14.384380  141822 main.go:141] libmachine: Using API Version  1
I0906 23:59:14.384412  141822 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:59:14.384732  141822 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:59:14.384948  141822 main.go:141] libmachine: (functional-445996) Calling .DriverName
I0906 23:59:14.385145  141822 ssh_runner.go:195] Run: systemctl --version
I0906 23:59:14.385178  141822 main.go:141] libmachine: (functional-445996) Calling .GetSSHHostname
I0906 23:59:14.388677  141822 main.go:141] libmachine: (functional-445996) DBG | domain functional-445996 has defined MAC address 52:54:00:e1:ff:1e in network mk-functional-445996
I0906 23:59:14.389149  141822 main.go:141] libmachine: (functional-445996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ff:1e", ip: ""} in network mk-functional-445996: {Iface:virbr1 ExpiryTime:2025-09-07 00:55:37 +0000 UTC Type:0 Mac:52:54:00:e1:ff:1e Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:functional-445996 Clientid:01:52:54:00:e1:ff:1e}
I0906 23:59:14.389179  141822 main.go:141] libmachine: (functional-445996) DBG | domain functional-445996 has defined IP address 192.168.39.88 and MAC address 52:54:00:e1:ff:1e in network mk-functional-445996
I0906 23:59:14.389484  141822 main.go:141] libmachine: (functional-445996) Calling .GetSSHPort
I0906 23:59:14.389746  141822 main.go:141] libmachine: (functional-445996) Calling .GetSSHKeyPath
I0906 23:59:14.389954  141822 main.go:141] libmachine: (functional-445996) Calling .GetSSHUsername
I0906 23:59:14.390132  141822 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/functional-445996/id_rsa Username:docker}
I0906 23:59:14.496553  141822 ssh_runner.go:195] Run: sudo crictl images --output json
I0906 23:59:14.540900  141822 main.go:141] libmachine: Making call to close driver server
I0906 23:59:14.540918  141822 main.go:141] libmachine: (functional-445996) Calling .Close
I0906 23:59:14.541244  141822 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:59:14.541261  141822 main.go:141] libmachine: (functional-445996) DBG | Closing plugin on server side
I0906 23:59:14.541267  141822 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 23:59:14.541281  141822 main.go:141] libmachine: Making call to close driver server
I0906 23:59:14.541289  141822 main.go:141] libmachine: (functional-445996) Calling .Close
I0906 23:59:14.541523  141822 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:59:14.541537  141822 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-445996 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "76004183"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "73138071"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224
repoDigests:
- docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57
- docker.io/library/nginx@sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7
repoTags:
- docker.io/library/nginx:latest
size: "196544386"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "89050097"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: ffe9d25b355ea5f8f6c28524f8320fa397225cc9304d776498dcadbea64e6b51
repoDigests:
- localhost/minikube-local-cache-test@sha256:e9ffef205aa71a8bd1d64d08bc8e952f4452e7360e89836455a4a27b1f5f7b5e
repoTags:
- localhost/minikube-local-cache-test:functional-445996
size: "3330"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-445996
size: "4943877"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "53844823"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-445996 image ls --format yaml --alsologtostderr:
I0906 23:59:12.434652  141573 out.go:360] Setting OutFile to fd 1 ...
I0906 23:59:12.434926  141573 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0906 23:59:12.434936  141573 out.go:374] Setting ErrFile to fd 2...
I0906 23:59:12.434940  141573 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0906 23:59:12.435178  141573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-128697/.minikube/bin
I0906 23:59:12.435770  141573 config.go:182] Loaded profile config "functional-445996": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0906 23:59:12.435866  141573 config.go:182] Loaded profile config "functional-445996": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0906 23:59:12.436241  141573 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
I0906 23:59:12.436315  141573 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:59:12.452880  141573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37449
I0906 23:59:12.453457  141573 main.go:141] libmachine: () Calling .GetVersion
I0906 23:59:12.454047  141573 main.go:141] libmachine: Using API Version  1
I0906 23:59:12.454067  141573 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:59:12.454517  141573 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:59:12.454721  141573 main.go:141] libmachine: (functional-445996) Calling .GetState
I0906 23:59:12.456967  141573 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
I0906 23:59:12.457026  141573 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:59:12.477372  141573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45471
I0906 23:59:12.477923  141573 main.go:141] libmachine: () Calling .GetVersion
I0906 23:59:12.478504  141573 main.go:141] libmachine: Using API Version  1
I0906 23:59:12.478539  141573 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:59:12.478955  141573 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:59:12.479180  141573 main.go:141] libmachine: (functional-445996) Calling .DriverName
I0906 23:59:12.479465  141573 ssh_runner.go:195] Run: systemctl --version
I0906 23:59:12.479496  141573 main.go:141] libmachine: (functional-445996) Calling .GetSSHHostname
I0906 23:59:12.482763  141573 main.go:141] libmachine: (functional-445996) DBG | domain functional-445996 has defined MAC address 52:54:00:e1:ff:1e in network mk-functional-445996
I0906 23:59:12.483231  141573 main.go:141] libmachine: (functional-445996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ff:1e", ip: ""} in network mk-functional-445996: {Iface:virbr1 ExpiryTime:2025-09-07 00:55:37 +0000 UTC Type:0 Mac:52:54:00:e1:ff:1e Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:functional-445996 Clientid:01:52:54:00:e1:ff:1e}
I0906 23:59:12.483261  141573 main.go:141] libmachine: (functional-445996) DBG | domain functional-445996 has defined IP address 192.168.39.88 and MAC address 52:54:00:e1:ff:1e in network mk-functional-445996
I0906 23:59:12.483435  141573 main.go:141] libmachine: (functional-445996) Calling .GetSSHPort
I0906 23:59:12.483664  141573 main.go:141] libmachine: (functional-445996) Calling .GetSSHKeyPath
I0906 23:59:12.483854  141573 main.go:141] libmachine: (functional-445996) Calling .GetSSHUsername
I0906 23:59:12.484021  141573 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/functional-445996/id_rsa Username:docker}
I0906 23:59:12.609923  141573 ssh_runner.go:195] Run: sudo crictl images --output json
I0906 23:59:12.688337  141573 main.go:141] libmachine: Making call to close driver server
I0906 23:59:12.688351  141573 main.go:141] libmachine: (functional-445996) Calling .Close
I0906 23:59:12.688695  141573 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:59:12.688715  141573 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 23:59:12.688728  141573 main.go:141] libmachine: Making call to close driver server
I0906 23:59:12.688737  141573 main.go:141] libmachine: (functional-445996) Calling .Close
I0906 23:59:12.689172  141573 main.go:141] libmachine: (functional-445996) DBG | Closing plugin on server side
I0906 23:59:12.689194  141573 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:59:12.689209  141573 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)
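
Both image-list checks above reduce to the same underlying call that the ssh_runner lines record: minikube opens an SSH session into the VM and shells out to crictl, then reformats the result. A minimal by-hand reproduction, assuming the same profile name and binary path as this run:

  # let minikube format the runtime's image list, as the test does
  out/minikube-linux-amd64 -p functional-445996 image ls --format yaml --alsologtostderr
  # or query CRI-O directly inside the guest, which is what ssh_runner runs
  out/minikube-linux-amd64 -p functional-445996 ssh -- sudo crictl images --output json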

TestFunctional/parallel/ImageCommands/ImageBuild (3.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445996 ssh pgrep buildkitd: exit status 1 (313.703244ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 image build -t localhost/my-image:functional-445996 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-445996 image build -t localhost/my-image:functional-445996 testdata/build --alsologtostderr: (3.212886903s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-445996 image build -t localhost/my-image:functional-445996 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> a331070d187
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-445996
--> c1d0a0fe83b
Successfully tagged localhost/my-image:functional-445996
c1d0a0fe83bf5c655a97268aa2ee9b44ff42c7badb6c89a8bf3444bd26b8a0d8
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-445996 image build -t localhost/my-image:functional-445996 testdata/build --alsologtostderr:
I0906 23:59:13.068552  141650 out.go:360] Setting OutFile to fd 1 ...
I0906 23:59:13.068732  141650 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0906 23:59:13.068770  141650 out.go:374] Setting ErrFile to fd 2...
I0906 23:59:13.068777  141650 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0906 23:59:13.069130  141650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-128697/.minikube/bin
I0906 23:59:13.070009  141650 config.go:182] Loaded profile config "functional-445996": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0906 23:59:13.071104  141650 config.go:182] Loaded profile config "functional-445996": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0906 23:59:13.071684  141650 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
I0906 23:59:13.071760  141650 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:59:13.091204  141650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37467
I0906 23:59:13.091943  141650 main.go:141] libmachine: () Calling .GetVersion
I0906 23:59:13.092710  141650 main.go:141] libmachine: Using API Version  1
I0906 23:59:13.092778  141650 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:59:13.093203  141650 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:59:13.093493  141650 main.go:141] libmachine: (functional-445996) Calling .GetState
I0906 23:59:13.095584  141650 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
I0906 23:59:13.095657  141650 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 23:59:13.113017  141650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35405
I0906 23:59:13.113618  141650 main.go:141] libmachine: () Calling .GetVersion
I0906 23:59:13.114283  141650 main.go:141] libmachine: Using API Version  1
I0906 23:59:13.114314  141650 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 23:59:13.114677  141650 main.go:141] libmachine: () Calling .GetMachineName
I0906 23:59:13.114950  141650 main.go:141] libmachine: (functional-445996) Calling .DriverName
I0906 23:59:13.115206  141650 ssh_runner.go:195] Run: systemctl --version
I0906 23:59:13.115237  141650 main.go:141] libmachine: (functional-445996) Calling .GetSSHHostname
I0906 23:59:13.118755  141650 main.go:141] libmachine: (functional-445996) DBG | domain functional-445996 has defined MAC address 52:54:00:e1:ff:1e in network mk-functional-445996
I0906 23:59:13.119225  141650 main.go:141] libmachine: (functional-445996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:ff:1e", ip: ""} in network mk-functional-445996: {Iface:virbr1 ExpiryTime:2025-09-07 00:55:37 +0000 UTC Type:0 Mac:52:54:00:e1:ff:1e Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:functional-445996 Clientid:01:52:54:00:e1:ff:1e}
I0906 23:59:13.119257  141650 main.go:141] libmachine: (functional-445996) DBG | domain functional-445996 has defined IP address 192.168.39.88 and MAC address 52:54:00:e1:ff:1e in network mk-functional-445996
I0906 23:59:13.119492  141650 main.go:141] libmachine: (functional-445996) Calling .GetSSHPort
I0906 23:59:13.119709  141650 main.go:141] libmachine: (functional-445996) Calling .GetSSHKeyPath
I0906 23:59:13.119899  141650 main.go:141] libmachine: (functional-445996) Calling .GetSSHUsername
I0906 23:59:13.120135  141650 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/functional-445996/id_rsa Username:docker}
I0906 23:59:13.220273  141650 build_images.go:161] Building image from path: /tmp/build.4274175110.tar
I0906 23:59:13.220353  141650 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0906 23:59:13.249735  141650 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4274175110.tar
I0906 23:59:13.258410  141650 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4274175110.tar: stat -c "%s %y" /var/lib/minikube/build/build.4274175110.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4274175110.tar': No such file or directory
I0906 23:59:13.258452  141650 ssh_runner.go:362] scp /tmp/build.4274175110.tar --> /var/lib/minikube/build/build.4274175110.tar (3072 bytes)
I0906 23:59:13.317118  141650 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4274175110
I0906 23:59:13.339648  141650 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4274175110 -xf /var/lib/minikube/build/build.4274175110.tar
I0906 23:59:13.364155  141650 crio.go:315] Building image: /var/lib/minikube/build/build.4274175110
I0906 23:59:13.364245  141650 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-445996 /var/lib/minikube/build/build.4274175110 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0906 23:59:16.186996  141650 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-445996 /var/lib/minikube/build/build.4274175110 --cgroup-manager=cgroupfs: (2.822721555s)
I0906 23:59:16.187082  141650 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4274175110
I0906 23:59:16.201337  141650 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4274175110.tar
I0906 23:59:16.214571  141650 build_images.go:217] Built localhost/my-image:functional-445996 from /tmp/build.4274175110.tar
I0906 23:59:16.214619  141650 build_images.go:133] succeeded building to: functional-445996
I0906 23:59:16.214626  141650 build_images.go:134] failed building to: 
I0906 23:59:16.214663  141650 main.go:141] libmachine: Making call to close driver server
I0906 23:59:16.214679  141650 main.go:141] libmachine: (functional-445996) Calling .Close
I0906 23:59:16.214993  141650 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:59:16.215012  141650 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 23:59:16.215024  141650 main.go:141] libmachine: Making call to close driver server
I0906 23:59:16.215032  141650 main.go:141] libmachine: (functional-445996) Calling .Close
I0906 23:59:16.215038  141650 main.go:141] libmachine: (functional-445996) DBG | Closing plugin on server side
I0906 23:59:16.215265  141650 main.go:141] libmachine: Successfully made call to close driver server
I0906 23:59:16.215300  141650 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 23:59:16.215399  141650 main.go:141] libmachine: (functional-445996) DBG | Closing plugin on server side
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.76s)
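
Judging from the STEP 1/3 through 3/3 lines in the stdout above, the testdata/build context appears to be a three-instruction Dockerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /); that reading is a reconstruction from the log, not a quote of the repository file. Rerunning the same path by hand uses the commands already shown:

  # scp the build context into the VM and build it there with podman,
  # per the ssh_runner and crio.go lines above
  out/minikube-linux-amd64 -p functional-445996 image build -t localhost/my-image:functional-445996 testdata/build --alsologtostderr
  # confirm the new tag is visible to the runtime
  out/minikube-linux-amd64 -p functional-445996 image ls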

TestFunctional/parallel/ImageCommands/Setup (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-445996
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.43s)

TestFunctional/parallel/ServiceCmd/DeployApp (19.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-445996 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-445996 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-5h9v2" [38498a77-6313-4e5e-afe4-a454f55f0ce8] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-5h9v2" [38498a77-6313-4e5e-afe4-a454f55f0ce8] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 19.004094498s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (19.25s)
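
The deploy itself is plain kubectl against the profile's context; only the readiness poll lives in the test framework. An equivalent by hand (the kubectl wait line is an assumed stand-in for the test's 10m0s pod polling, not a command the test runs):

  kubectl --context functional-445996 create deployment hello-node --image kicbase/echo-server
  kubectl --context functional-445996 expose deployment hello-node --type=NodePort --port=8080
  # assumed equivalent of the matching-pod wait above
  kubectl --context functional-445996 wait --for=condition=Ready pod -l app=hello-node --timeout=10m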

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 image load --daemon kicbase/echo-server:functional-445996 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-445996 image load --daemon kicbase/echo-server:functional-445996 --alsologtostderr: (2.782067301s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.15s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (7.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 image load --daemon kicbase/echo-server:functional-445996 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-445996 image load --daemon kicbase/echo-server:functional-445996 --alsologtostderr: (7.468161281s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (7.78s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-445996
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 image load --daemon kicbase/echo-server:functional-445996 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 image save kicbase/echo-server:functional-445996 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.89s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 image rm kicbase/echo-server:functional-445996 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.88s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-445996 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (4.309658306s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.54s)
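
ImageSaveToFile, ImageRemove, and ImageLoadFromFile together exercise a save/remove/reload round-trip. Condensed into one sequence, with the Jenkins workspace path swapped for an illustrative /tmp path:

  out/minikube-linux-amd64 -p functional-445996 image save kicbase/echo-server:functional-445996 /tmp/echo-server-save.tar --alsologtostderr
  out/minikube-linux-amd64 -p functional-445996 image rm kicbase/echo-server:functional-445996 --alsologtostderr
  out/minikube-linux-amd64 -p functional-445996 image load /tmp/echo-server-save.tar --alsologtostderr
  out/minikube-linux-amd64 -p functional-445996 image ls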

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-445996
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 image save --daemon kicbase/echo-server:functional-445996 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-445996
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.60s)

TestFunctional/parallel/ServiceCmd/List (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 service list -o json
functional_test.go:1504: Took "542.834283ms" to run "out/minikube-linux-amd64 -p functional-445996 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.88:32379
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

TestFunctional/parallel/ServiceCmd/Format (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

TestFunctional/parallel/ServiceCmd/URL (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.88:32379
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctional/parallel/MountCmd/any-port (10.73s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-445996 /tmp/TestFunctionalparallelMountCmdany-port3232014765/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1757203143313483127" to /tmp/TestFunctionalparallelMountCmdany-port3232014765/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1757203143313483127" to /tmp/TestFunctionalparallelMountCmdany-port3232014765/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1757203143313483127" to /tmp/TestFunctionalparallelMountCmdany-port3232014765/001/test-1757203143313483127
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445996 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (241.19082ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0906 23:59:03.555024  133025 retry.go:31] will retry after 348.675765ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  6 23:59 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  6 23:59 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  6 23:59 test-1757203143313483127
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh cat /mount-9p/test-1757203143313483127
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-445996 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [ffd4c0e9-1be0-4cce-9f6a-e0e9cc9407be] Pending
helpers_test.go:352: "busybox-mount" [ffd4c0e9-1be0-4cce-9f6a-e0e9cc9407be] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [ffd4c0e9-1be0-4cce-9f6a-e0e9cc9407be] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [ffd4c0e9-1be0-4cce-9f6a-e0e9cc9407be] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.010536578s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-445996 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-445996 /tmp/TestFunctionalparallelMountCmdany-port3232014765/001:/mount-9p --alsologtostderr -v=1] ...
2025/09/06 23:59:13 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.73s)
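
The mount tests run a background 9p server on the host and verify it from inside the guest. A hand-run sketch of the same flow (backgrounding with & and the explicit kill are illustrative; the test harness manages the daemon itself):

  out/minikube-linux-amd64 mount -p functional-445996 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
  MOUNT_PID=$!
  out/minikube-linux-amd64 -p functional-445996 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-445996 ssh -- ls -la /mount-9p
  kill "$MOUNT_PID"

Note that the first findmnt probe above failed and was retried after ~350ms: the mount daemon needs a moment to come up before the guest sees the filesystem.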

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "306.279806ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "58.665336ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "338.373555ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "56.827686ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/MountCmd/specific-port (1.7s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-445996 /tmp/TestFunctionalparallelMountCmdspecific-port2609735982/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445996 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (277.075627ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0906 23:59:14.315847  133025 retry.go:31] will retry after 280.723182ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-445996 /tmp/TestFunctionalparallelMountCmdspecific-port2609735982/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445996 ssh "sudo umount -f /mount-9p": exit status 1 (227.553003ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-445996 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-445996 /tmp/TestFunctionalparallelMountCmdspecific-port2609735982/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.70s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.66s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-445996 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3688807074/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-445996 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3688807074/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-445996 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3688807074/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-445996 ssh "findmnt -T" /mount1: exit status 1 (308.783156ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0906 23:59:16.067020  133025 retry.go:31] will retry after 647.861573ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-445996 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-445996 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-445996 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3688807074/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-445996 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3688807074/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-445996 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3688807074/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.66s)
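
VerifyCleanup tears the three mounts down through minikube's own kill switch rather than by signalling the daemon processes, which is why the subsequent stop calls find no parent process:

  out/minikube-linux-amd64 mount -p functional-445996 --kill=true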

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-445996
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-445996
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-445996
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (242.47s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E0906 23:59:25.452031  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0906 23:59:25.458572  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0906 23:59:25.470081  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0906 23:59:25.491562  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0906 23:59:25.533120  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0906 23:59:25.614682  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0906 23:59:25.776275  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0906 23:59:26.098460  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0906 23:59:26.740631  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0906 23:59:28.022874  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0906 23:59:30.585089  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0906 23:59:35.706619  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0906 23:59:45.948492  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:00:06.430554  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:00:47.392613  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:02:09.315057  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-367988 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (4m1.71947538s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (242.47s)

TestMultiControlPlane/serial/DeployApp (6.25s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-367988 kubectl -- rollout status deployment/busybox: (3.856322555s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 kubectl -- exec busybox-7b57f96db7-4thdp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 kubectl -- exec busybox-7b57f96db7-fgwcp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 kubectl -- exec busybox-7b57f96db7-n9qwm -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 kubectl -- exec busybox-7b57f96db7-4thdp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 kubectl -- exec busybox-7b57f96db7-fgwcp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 kubectl -- exec busybox-7b57f96db7-n9qwm -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 kubectl -- exec busybox-7b57f96db7-4thdp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 kubectl -- exec busybox-7b57f96db7-fgwcp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 kubectl -- exec busybox-7b57f96db7-n9qwm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.25s)

TestMultiControlPlane/serial/PingHostFromPods (1.36s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 kubectl -- exec busybox-7b57f96db7-4thdp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 kubectl -- exec busybox-7b57f96db7-4thdp -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 kubectl -- exec busybox-7b57f96db7-fgwcp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 kubectl -- exec busybox-7b57f96db7-fgwcp -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 kubectl -- exec busybox-7b57f96db7-n9qwm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 kubectl -- exec busybox-7b57f96db7-n9qwm -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.36s)
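The host-ping check above is a two-step pipeline run inside each busybox pod: nslookup resolves host.minikube.internal, awk 'NR==5' keeps the fifth line of busybox's nslookup output (where the answer address sits), and cut takes its third space-separated field, which is then pinged once. A standalone sketch of one iteration, assuming that busybox output layout:

	# resolve the host-side gateway from inside the pod, then ping it once
	HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
	ping -c 1 "$HOST_IP"    # resolved to 192.168.39.1 in this run
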

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (51.45s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 node add --alsologtostderr -v 5
E0907 00:03:40.803624  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:03:40.810182  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:03:40.821669  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:03:40.843136  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:03:40.884661  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:03:40.966262  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:03:41.127825  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:03:41.449576  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:03:42.091699  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:03:43.374049  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:03:45.936131  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:03:51.058536  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:04:01.300064  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:04:21.781832  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-367988 node add --alsologtostderr -v 5: (50.489580155s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (51.45s)

TestMultiControlPlane/serial/NodeLabels (0.08s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-367988 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.98s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.98s)

TestMultiControlPlane/serial/CopyFile (14.3s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 status --output json --alsologtostderr -v 5
E0907 00:04:25.452902  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 cp testdata/cp-test.txt ha-367988:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 cp ha-367988:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile461812374/001/cp-test_ha-367988.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 cp ha-367988:/home/docker/cp-test.txt ha-367988-m02:/home/docker/cp-test_ha-367988_ha-367988-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988-m02 "sudo cat /home/docker/cp-test_ha-367988_ha-367988-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 cp ha-367988:/home/docker/cp-test.txt ha-367988-m03:/home/docker/cp-test_ha-367988_ha-367988-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988-m03 "sudo cat /home/docker/cp-test_ha-367988_ha-367988-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 cp ha-367988:/home/docker/cp-test.txt ha-367988-m04:/home/docker/cp-test_ha-367988_ha-367988-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988-m04 "sudo cat /home/docker/cp-test_ha-367988_ha-367988-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 cp testdata/cp-test.txt ha-367988-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 cp ha-367988-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile461812374/001/cp-test_ha-367988-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 cp ha-367988-m02:/home/docker/cp-test.txt ha-367988:/home/docker/cp-test_ha-367988-m02_ha-367988.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988 "sudo cat /home/docker/cp-test_ha-367988-m02_ha-367988.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 cp ha-367988-m02:/home/docker/cp-test.txt ha-367988-m03:/home/docker/cp-test_ha-367988-m02_ha-367988-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988-m03 "sudo cat /home/docker/cp-test_ha-367988-m02_ha-367988-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 cp ha-367988-m02:/home/docker/cp-test.txt ha-367988-m04:/home/docker/cp-test_ha-367988-m02_ha-367988-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988-m04 "sudo cat /home/docker/cp-test_ha-367988-m02_ha-367988-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 cp testdata/cp-test.txt ha-367988-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 cp ha-367988-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile461812374/001/cp-test_ha-367988-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 cp ha-367988-m03:/home/docker/cp-test.txt ha-367988:/home/docker/cp-test_ha-367988-m03_ha-367988.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988 "sudo cat /home/docker/cp-test_ha-367988-m03_ha-367988.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 cp ha-367988-m03:/home/docker/cp-test.txt ha-367988-m02:/home/docker/cp-test_ha-367988-m03_ha-367988-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988-m02 "sudo cat /home/docker/cp-test_ha-367988-m03_ha-367988-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 cp ha-367988-m03:/home/docker/cp-test.txt ha-367988-m04:/home/docker/cp-test_ha-367988-m03_ha-367988-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988-m04 "sudo cat /home/docker/cp-test_ha-367988-m03_ha-367988-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 cp testdata/cp-test.txt ha-367988-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 cp ha-367988-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile461812374/001/cp-test_ha-367988-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 cp ha-367988-m04:/home/docker/cp-test.txt ha-367988:/home/docker/cp-test_ha-367988-m04_ha-367988.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988 "sudo cat /home/docker/cp-test_ha-367988-m04_ha-367988.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 cp ha-367988-m04:/home/docker/cp-test.txt ha-367988-m02:/home/docker/cp-test_ha-367988-m04_ha-367988-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988-m02 "sudo cat /home/docker/cp-test_ha-367988-m04_ha-367988-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 cp ha-367988-m04:/home/docker/cp-test.txt ha-367988-m03:/home/docker/cp-test_ha-367988-m04_ha-367988-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988-m03 "sudo cat /home/docker/cp-test_ha-367988-m04_ha-367988-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (14.30s)
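Each CopyFile step pairs a minikube cp with an ssh readback so the fixture is verified on the node it landed on; the matrix above covers host-to-node plus every ordered node-to-node pair across ha-367988 and its m02/m03/m04 peers. One round trip, lifted out as a sketch:

	# copy the fixture onto a node, then cat it back over ssh to confirm the round trip
	out/minikube-linux-amd64 -p ha-367988 cp testdata/cp-test.txt ha-367988-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-367988 ssh -n ha-367988-m02 "sudo cat /home/docker/cp-test.txt"
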

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.62s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 node stop m02 --alsologtostderr -v 5
E0907 00:04:53.157146  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:05:02.744093  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-367988 node stop m02 --alsologtostderr -v 5: (1m30.877029104s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-367988 status --alsologtostderr -v 5: exit status 7 (737.64812ms)

-- stdout --
	ha-367988
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-367988-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-367988-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-367988-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I0907 00:06:09.949774  146790 out.go:360] Setting OutFile to fd 1 ...
	I0907 00:06:09.950070  146790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:06:09.950079  146790 out.go:374] Setting ErrFile to fd 2...
	I0907 00:06:09.950084  146790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:06:09.950331  146790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-128697/.minikube/bin
	I0907 00:06:09.950575  146790 out.go:368] Setting JSON to false
	I0907 00:06:09.950612  146790 mustload.go:65] Loading cluster: ha-367988
	I0907 00:06:09.950652  146790 notify.go:220] Checking for updates...
	I0907 00:06:09.951108  146790 config.go:182] Loaded profile config "ha-367988": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 00:06:09.951139  146790 status.go:174] checking status of ha-367988 ...
	I0907 00:06:09.951594  146790 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:06:09.951646  146790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:06:09.970086  146790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45915
	I0907 00:06:09.970640  146790 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:06:09.971407  146790 main.go:141] libmachine: Using API Version  1
	I0907 00:06:09.971479  146790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:06:09.971973  146790 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:06:09.972214  146790 main.go:141] libmachine: (ha-367988) Calling .GetState
	I0907 00:06:09.974200  146790 status.go:371] ha-367988 host status = "Running" (err=<nil>)
	I0907 00:06:09.974222  146790 host.go:66] Checking if "ha-367988" exists ...
	I0907 00:06:09.974634  146790 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:06:09.974689  146790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:06:09.991107  146790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32779
	I0907 00:06:09.991593  146790 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:06:09.992122  146790 main.go:141] libmachine: Using API Version  1
	I0907 00:06:09.992145  146790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:06:09.992514  146790 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:06:09.992729  146790 main.go:141] libmachine: (ha-367988) Calling .GetIP
	I0907 00:06:09.996590  146790 main.go:141] libmachine: (ha-367988) DBG | domain ha-367988 has defined MAC address 52:54:00:da:67:25 in network mk-ha-367988
	I0907 00:06:09.997204  146790 main.go:141] libmachine: (ha-367988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:67:25", ip: ""} in network mk-ha-367988: {Iface:virbr1 ExpiryTime:2025-09-07 00:59:37 +0000 UTC Type:0 Mac:52:54:00:da:67:25 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-367988 Clientid:01:52:54:00:da:67:25}
	I0907 00:06:09.997244  146790 main.go:141] libmachine: (ha-367988) DBG | domain ha-367988 has defined IP address 192.168.39.120 and MAC address 52:54:00:da:67:25 in network mk-ha-367988
	I0907 00:06:09.997397  146790 host.go:66] Checking if "ha-367988" exists ...
	I0907 00:06:09.997700  146790 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:06:09.997759  146790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:06:10.015501  146790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43419
	I0907 00:06:10.016162  146790 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:06:10.016810  146790 main.go:141] libmachine: Using API Version  1
	I0907 00:06:10.016840  146790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:06:10.017314  146790 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:06:10.017521  146790 main.go:141] libmachine: (ha-367988) Calling .DriverName
	I0907 00:06:10.017813  146790 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0907 00:06:10.017859  146790 main.go:141] libmachine: (ha-367988) Calling .GetSSHHostname
	I0907 00:06:10.021382  146790 main.go:141] libmachine: (ha-367988) DBG | domain ha-367988 has defined MAC address 52:54:00:da:67:25 in network mk-ha-367988
	I0907 00:06:10.021954  146790 main.go:141] libmachine: (ha-367988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:67:25", ip: ""} in network mk-ha-367988: {Iface:virbr1 ExpiryTime:2025-09-07 00:59:37 +0000 UTC Type:0 Mac:52:54:00:da:67:25 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-367988 Clientid:01:52:54:00:da:67:25}
	I0907 00:06:10.021990  146790 main.go:141] libmachine: (ha-367988) DBG | domain ha-367988 has defined IP address 192.168.39.120 and MAC address 52:54:00:da:67:25 in network mk-ha-367988
	I0907 00:06:10.022100  146790 main.go:141] libmachine: (ha-367988) Calling .GetSSHPort
	I0907 00:06:10.022328  146790 main.go:141] libmachine: (ha-367988) Calling .GetSSHKeyPath
	I0907 00:06:10.022500  146790 main.go:141] libmachine: (ha-367988) Calling .GetSSHUsername
	I0907 00:06:10.022675  146790 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/ha-367988/id_rsa Username:docker}
	I0907 00:06:10.122542  146790 ssh_runner.go:195] Run: systemctl --version
	I0907 00:06:10.129753  146790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:06:10.151534  146790 kubeconfig.go:125] found "ha-367988" server: "https://192.168.39.254:8443"
	I0907 00:06:10.151602  146790 api_server.go:166] Checking apiserver status ...
	I0907 00:06:10.151660  146790 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:06:10.175559  146790 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1439/cgroup
	W0907 00:06:10.187720  146790 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1439/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:06:10.187790  146790 ssh_runner.go:195] Run: ls
	I0907 00:06:10.193437  146790 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0907 00:06:10.199086  146790 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0907 00:06:10.199119  146790 status.go:463] ha-367988 apiserver status = Running (err=<nil>)
	I0907 00:06:10.199133  146790 status.go:176] ha-367988 status: &{Name:ha-367988 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0907 00:06:10.199158  146790 status.go:174] checking status of ha-367988-m02 ...
	I0907 00:06:10.199484  146790 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:06:10.199534  146790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:06:10.218746  146790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44807
	I0907 00:06:10.219355  146790 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:06:10.219884  146790 main.go:141] libmachine: Using API Version  1
	I0907 00:06:10.219908  146790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:06:10.220262  146790 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:06:10.220505  146790 main.go:141] libmachine: (ha-367988-m02) Calling .GetState
	I0907 00:06:10.222330  146790 status.go:371] ha-367988-m02 host status = "Stopped" (err=<nil>)
	I0907 00:06:10.222348  146790 status.go:384] host is not running, skipping remaining checks
	I0907 00:06:10.222356  146790 status.go:176] ha-367988-m02 status: &{Name:ha-367988-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0907 00:06:10.222386  146790 status.go:174] checking status of ha-367988-m03 ...
	I0907 00:06:10.222693  146790 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:06:10.222766  146790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:06:10.238435  146790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44181
	I0907 00:06:10.238877  146790 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:06:10.239360  146790 main.go:141] libmachine: Using API Version  1
	I0907 00:06:10.239381  146790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:06:10.239739  146790 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:06:10.239943  146790 main.go:141] libmachine: (ha-367988-m03) Calling .GetState
	I0907 00:06:10.241875  146790 status.go:371] ha-367988-m03 host status = "Running" (err=<nil>)
	I0907 00:06:10.241899  146790 host.go:66] Checking if "ha-367988-m03" exists ...
	I0907 00:06:10.242195  146790 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:06:10.242237  146790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:06:10.258108  146790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33069
	I0907 00:06:10.258608  146790 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:06:10.259106  146790 main.go:141] libmachine: Using API Version  1
	I0907 00:06:10.259131  146790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:06:10.259553  146790 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:06:10.259775  146790 main.go:141] libmachine: (ha-367988-m03) Calling .GetIP
	I0907 00:06:10.263038  146790 main.go:141] libmachine: (ha-367988-m03) DBG | domain ha-367988-m03 has defined MAC address 52:54:00:c1:6b:9b in network mk-ha-367988
	I0907 00:06:10.263495  146790 main.go:141] libmachine: (ha-367988-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:6b:9b", ip: ""} in network mk-ha-367988: {Iface:virbr1 ExpiryTime:2025-09-07 01:02:07 +0000 UTC Type:0 Mac:52:54:00:c1:6b:9b Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-367988-m03 Clientid:01:52:54:00:c1:6b:9b}
	I0907 00:06:10.263521  146790 main.go:141] libmachine: (ha-367988-m03) DBG | domain ha-367988-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:c1:6b:9b in network mk-ha-367988
	I0907 00:06:10.263699  146790 host.go:66] Checking if "ha-367988-m03" exists ...
	I0907 00:06:10.264020  146790 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:06:10.264074  146790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:06:10.280254  146790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36053
	I0907 00:06:10.280816  146790 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:06:10.281395  146790 main.go:141] libmachine: Using API Version  1
	I0907 00:06:10.281414  146790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:06:10.281817  146790 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:06:10.282096  146790 main.go:141] libmachine: (ha-367988-m03) Calling .DriverName
	I0907 00:06:10.282326  146790 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0907 00:06:10.282360  146790 main.go:141] libmachine: (ha-367988-m03) Calling .GetSSHHostname
	I0907 00:06:10.285534  146790 main.go:141] libmachine: (ha-367988-m03) DBG | domain ha-367988-m03 has defined MAC address 52:54:00:c1:6b:9b in network mk-ha-367988
	I0907 00:06:10.285997  146790 main.go:141] libmachine: (ha-367988-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:6b:9b", ip: ""} in network mk-ha-367988: {Iface:virbr1 ExpiryTime:2025-09-07 01:02:07 +0000 UTC Type:0 Mac:52:54:00:c1:6b:9b Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-367988-m03 Clientid:01:52:54:00:c1:6b:9b}
	I0907 00:06:10.286028  146790 main.go:141] libmachine: (ha-367988-m03) DBG | domain ha-367988-m03 has defined IP address 192.168.39.113 and MAC address 52:54:00:c1:6b:9b in network mk-ha-367988
	I0907 00:06:10.286290  146790 main.go:141] libmachine: (ha-367988-m03) Calling .GetSSHPort
	I0907 00:06:10.286555  146790 main.go:141] libmachine: (ha-367988-m03) Calling .GetSSHKeyPath
	I0907 00:06:10.286733  146790 main.go:141] libmachine: (ha-367988-m03) Calling .GetSSHUsername
	I0907 00:06:10.286874  146790 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/ha-367988-m03/id_rsa Username:docker}
	I0907 00:06:10.374946  146790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:06:10.397678  146790 kubeconfig.go:125] found "ha-367988" server: "https://192.168.39.254:8443"
	I0907 00:06:10.397710  146790 api_server.go:166] Checking apiserver status ...
	I0907 00:06:10.397747  146790 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:06:10.421576  146790 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1830/cgroup
	W0907 00:06:10.435327  146790 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1830/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:06:10.435404  146790 ssh_runner.go:195] Run: ls
	I0907 00:06:10.441139  146790 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0907 00:06:10.446022  146790 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0907 00:06:10.446054  146790 status.go:463] ha-367988-m03 apiserver status = Running (err=<nil>)
	I0907 00:06:10.446064  146790 status.go:176] ha-367988-m03 status: &{Name:ha-367988-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0907 00:06:10.446087  146790 status.go:174] checking status of ha-367988-m04 ...
	I0907 00:06:10.446456  146790 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:06:10.446504  146790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:06:10.463782  146790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45607
	I0907 00:06:10.464275  146790 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:06:10.464779  146790 main.go:141] libmachine: Using API Version  1
	I0907 00:06:10.464805  146790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:06:10.465228  146790 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:06:10.465442  146790 main.go:141] libmachine: (ha-367988-m04) Calling .GetState
	I0907 00:06:10.467242  146790 status.go:371] ha-367988-m04 host status = "Running" (err=<nil>)
	I0907 00:06:10.467260  146790 host.go:66] Checking if "ha-367988-m04" exists ...
	I0907 00:06:10.467556  146790 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:06:10.467600  146790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:06:10.483677  146790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33941
	I0907 00:06:10.484183  146790 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:06:10.484651  146790 main.go:141] libmachine: Using API Version  1
	I0907 00:06:10.484675  146790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:06:10.485044  146790 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:06:10.485285  146790 main.go:141] libmachine: (ha-367988-m04) Calling .GetIP
	I0907 00:06:10.488427  146790 main.go:141] libmachine: (ha-367988-m04) DBG | domain ha-367988-m04 has defined MAC address 52:54:00:32:f6:91 in network mk-ha-367988
	I0907 00:06:10.488912  146790 main.go:141] libmachine: (ha-367988-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f6:91", ip: ""} in network mk-ha-367988: {Iface:virbr1 ExpiryTime:2025-09-07 01:03:48 +0000 UTC Type:0 Mac:52:54:00:32:f6:91 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-367988-m04 Clientid:01:52:54:00:32:f6:91}
	I0907 00:06:10.488955  146790 main.go:141] libmachine: (ha-367988-m04) DBG | domain ha-367988-m04 has defined IP address 192.168.39.185 and MAC address 52:54:00:32:f6:91 in network mk-ha-367988
	I0907 00:06:10.489185  146790 host.go:66] Checking if "ha-367988-m04" exists ...
	I0907 00:06:10.489487  146790 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:06:10.489531  146790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:06:10.506957  146790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35013
	I0907 00:06:10.507409  146790 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:06:10.507946  146790 main.go:141] libmachine: Using API Version  1
	I0907 00:06:10.507966  146790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:06:10.508324  146790 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:06:10.508509  146790 main.go:141] libmachine: (ha-367988-m04) Calling .DriverName
	I0907 00:06:10.508705  146790 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0907 00:06:10.508726  146790 main.go:141] libmachine: (ha-367988-m04) Calling .GetSSHHostname
	I0907 00:06:10.512284  146790 main.go:141] libmachine: (ha-367988-m04) DBG | domain ha-367988-m04 has defined MAC address 52:54:00:32:f6:91 in network mk-ha-367988
	I0907 00:06:10.512705  146790 main.go:141] libmachine: (ha-367988-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f6:91", ip: ""} in network mk-ha-367988: {Iface:virbr1 ExpiryTime:2025-09-07 01:03:48 +0000 UTC Type:0 Mac:52:54:00:32:f6:91 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-367988-m04 Clientid:01:52:54:00:32:f6:91}
	I0907 00:06:10.512738  146790 main.go:141] libmachine: (ha-367988-m04) DBG | domain ha-367988-m04 has defined IP address 192.168.39.185 and MAC address 52:54:00:32:f6:91 in network mk-ha-367988
	I0907 00:06:10.512972  146790 main.go:141] libmachine: (ha-367988-m04) Calling .GetSSHPort
	I0907 00:06:10.513192  146790 main.go:141] libmachine: (ha-367988-m04) Calling .GetSSHKeyPath
	I0907 00:06:10.513346  146790 main.go:141] libmachine: (ha-367988-m04) Calling .GetSSHUsername
	I0907 00:06:10.513476  146790 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/ha-367988-m04/id_rsa Username:docker}
	I0907 00:06:10.607363  146790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:06:10.630670  146790 status.go:176] ha-367988-m04 status: &{Name:ha-367988-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.62s)
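The "unable to find freezer cgroup" warnings inside the stderr block are expected rather than a fault: on a cgroup v2 guest, /proc/<pid>/cgroup holds a single unified-hierarchy line (0::/...) with no per-controller "freezer:" entry, so the egrep exits 1 and the status check falls through to the /healthz probe, which returned 200 for both running control planes. A hypothetical way to confirm the layout on the node (assuming the guest runs cgroup v2):

	# on cgroup v2 this prints one "0::/..." line and no named controllers
	out/minikube-linux-amd64 -p ha-367988 ssh "cat /proc/1/cgroup"
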

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)

TestMultiControlPlane/serial/RestartSecondaryNode (38s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 node start m02 --alsologtostderr -v 5
E0907 00:06:24.666061  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-367988 node start m02 --alsologtostderr -v 5: (36.823532469s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-367988 status --alsologtostderr -v 5: (1.086146937s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (38.00s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.100348878s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.10s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (413.92s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 stop --alsologtostderr -v 5
E0907 00:08:40.803430  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:09:08.508444  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:09:25.452016  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-367988 stop --alsologtostderr -v 5: (4m34.830243979s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 start --wait true --alsologtostderr -v 5
E0907 00:13:40.803168  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-367988 start --wait true --alsologtostderr -v 5: (2m18.956049307s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (413.92s)

TestMultiControlPlane/serial/DeleteSecondaryNode (18.72s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-367988 node delete m03 --alsologtostderr -v 5: (17.870911025s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.72s)
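The go-template in the final step walks every node's status.conditions and prints the status of each Ready condition, so after deleting m03 a healthy cluster should print one True per remaining node. The same template, unwrapped for direct use:

	# prints one " True" line per Ready node
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
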

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

TestMultiControlPlane/serial/StopCluster (272.44s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 stop --alsologtostderr -v 5
E0907 00:14:25.452306  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:15:48.518647  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-367988 stop --alsologtostderr -v 5: (4m32.320164644s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-367988 status --alsologtostderr -v 5: exit status 7 (118.672049ms)

-- stdout --
	ha-367988
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-367988-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-367988-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0907 00:18:36.187118  151261 out.go:360] Setting OutFile to fd 1 ...
	I0907 00:18:36.187372  151261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:18:36.187381  151261 out.go:374] Setting ErrFile to fd 2...
	I0907 00:18:36.187384  151261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:18:36.187594  151261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-128697/.minikube/bin
	I0907 00:18:36.187911  151261 out.go:368] Setting JSON to false
	I0907 00:18:36.187947  151261 mustload.go:65] Loading cluster: ha-367988
	I0907 00:18:36.188057  151261 notify.go:220] Checking for updates...
	I0907 00:18:36.189225  151261 config.go:182] Loaded profile config "ha-367988": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 00:18:36.189338  151261 status.go:174] checking status of ha-367988 ...
	I0907 00:18:36.190020  151261 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:18:36.190075  151261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:18:36.210800  151261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43437
	I0907 00:18:36.211415  151261 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:18:36.211979  151261 main.go:141] libmachine: Using API Version  1
	I0907 00:18:36.212011  151261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:18:36.212431  151261 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:18:36.212706  151261 main.go:141] libmachine: (ha-367988) Calling .GetState
	I0907 00:18:36.214694  151261 status.go:371] ha-367988 host status = "Stopped" (err=<nil>)
	I0907 00:18:36.214716  151261 status.go:384] host is not running, skipping remaining checks
	I0907 00:18:36.214738  151261 status.go:176] ha-367988 status: &{Name:ha-367988 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0907 00:18:36.214772  151261 status.go:174] checking status of ha-367988-m02 ...
	I0907 00:18:36.215135  151261 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:18:36.215201  151261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:18:36.231230  151261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44353
	I0907 00:18:36.231797  151261 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:18:36.232343  151261 main.go:141] libmachine: Using API Version  1
	I0907 00:18:36.232367  151261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:18:36.232730  151261 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:18:36.232946  151261 main.go:141] libmachine: (ha-367988-m02) Calling .GetState
	I0907 00:18:36.234999  151261 status.go:371] ha-367988-m02 host status = "Stopped" (err=<nil>)
	I0907 00:18:36.235018  151261 status.go:384] host is not running, skipping remaining checks
	I0907 00:18:36.235024  151261 status.go:176] ha-367988-m02 status: &{Name:ha-367988-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0907 00:18:36.235043  151261 status.go:174] checking status of ha-367988-m04 ...
	I0907 00:18:36.235360  151261 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:18:36.235420  151261 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:18:36.251423  151261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40993
	I0907 00:18:36.251875  151261 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:18:36.252364  151261 main.go:141] libmachine: Using API Version  1
	I0907 00:18:36.252392  151261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:18:36.252808  151261 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:18:36.253004  151261 main.go:141] libmachine: (ha-367988-m04) Calling .GetState
	I0907 00:18:36.254718  151261 status.go:371] ha-367988-m04 host status = "Stopped" (err=<nil>)
	I0907 00:18:36.254740  151261 status.go:384] host is not running, skipping remaining checks
	I0907 00:18:36.254748  151261 status.go:176] ha-367988-m04 status: &{Name:ha-367988-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.44s)

TestMultiControlPlane/serial/RestartCluster (133.5s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E0907 00:18:40.803420  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:19:25.453591  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:20:03.872613  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-367988 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (2m12.637455364s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (133.50s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)

TestMultiControlPlane/serial/AddSecondaryNode (94.54s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-367988 node add --control-plane --alsologtostderr -v 5: (1m33.582818611s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-367988 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (94.54s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.96s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.96s)

TestJSONOutput/start/Command (83.93s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-386015 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E0907 00:23:40.807657  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-386015 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m23.92618053s)
--- PASS: TestJSONOutput/start/Command (83.93s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.86s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-386015 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.86s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.72s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-386015 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.72s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.37s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-386015 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-386015 --output=json --user=testUser: (7.368674496s)
--- PASS: TestJSONOutput/stop/Command (7.37s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-407008 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-407008 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (66.758717ms)

-- stdout --
	{"specversion":"1.0","id":"fb2249fa-a612-4a82-9f49-e6f196f6997e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-407008] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7ef29f99-769c-4a32-86b1-5004d547dda2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21132"}}
	{"specversion":"1.0","id":"46d2e31b-15cf-42ea-b2ca-8bef2dd843a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"364ea6b4-6524-4c39-8207-d81c5f9b3535","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21132-128697/kubeconfig"}}
	{"specversion":"1.0","id":"ac38f7ba-a914-4cef-90a2-9c6b7c642416","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-128697/.minikube"}}
	{"specversion":"1.0","id":"bed414c9-3bad-44a3-9afa-98411426dd4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8433cb8d-4430-4fa3-b1f5-118d7a6d53bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b0b087b5-1a6d-43f5-af08-a37b2fd584c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-407008" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-407008
--- PASS: TestErrorJSONOutput (0.21s)
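
Each line of the --output=json stream captured above is a CloudEvents envelope: the event kind is carried in the type field (io.k8s.sigs.minikube.step, .info, .error) and the minikube-specific payload (step counters, log messages, error codes) sits under data. A minimal shell sketch, assuming jq is available on the host, that pulls the error code and message out of a failed start like the one above:

    # stream the line-delimited CloudEvents and keep only error events
    out/minikube-linux-amd64 start -p json-output-error-407008 --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
    # per the captured stdout above, this would print:
    # DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64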

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (92.84s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-003835 --driver=kvm2  --container-runtime=crio
E0907 00:24:25.453018  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-003835 --driver=kvm2  --container-runtime=crio: (44.842721021s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-062487 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-062487 --driver=kvm2  --container-runtime=crio: (45.181531854s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-003835
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-062487
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-062487" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-062487
helpers_test.go:175: Cleaning up "first-003835" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-003835
--- PASS: TestMinikubeProfile (92.84s)

TestMountStart/serial/StartWithMountFirst (26.65s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-031657 --memory=3072 --mount-string /tmp/TestMountStartserial1765767837/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-031657 --memory=3072 --mount-string /tmp/TestMountStartserial1765767837/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.649597366s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.65s)
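
The mount flags exercised above map a host directory into the guest over minikube's 9p mount: --mount-string takes host-path:guest-path, while --mount-uid/--mount-gid, --mount-port and --mount-msize pin ownership, the listening port and the 9p message size. A sketch of the same invocation with a placeholder host path (/tmp/data is hypothetical; every flag mirrors the command above):

    # start a no-kubernetes guest with /tmp/data mounted at /minikube-host
    out/minikube-linux-amd64 start -p mount-start-1-031657 --memory=3072 \
      --mount-string /tmp/data:/minikube-host \
      --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
      --no-kubernetes --driver=kvm2 --container-runtime=crio
    # confirm the mount from inside the guest, as the Verify steps below do
    out/minikube-linux-amd64 -p mount-start-1-031657 ssh -- findmnt --json /minikube-host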

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-031657 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-031657 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

TestMountStart/serial/StartWithMountSecond (30.35s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-048905 --memory=3072 --mount-string /tmp/TestMountStartserial1765767837/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-048905 --memory=3072 --mount-string /tmp/TestMountStartserial1765767837/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.348611217s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.35s)

TestMountStart/serial/VerifyMountSecond (0.41s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-048905 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-048905 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

TestMountStart/serial/DeleteFirst (0.58s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-031657 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.58s)

TestMountStart/serial/VerifyMountPostDelete (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-048905 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-048905 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

TestMountStart/serial/Stop (1.33s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-048905
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-048905: (1.333098537s)
--- PASS: TestMountStart/serial/Stop (1.33s)

TestMountStart/serial/RestartStopped (22.38s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-048905
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-048905: (21.382354645s)
--- PASS: TestMountStart/serial/RestartStopped (22.38s)

TestMountStart/serial/VerifyMountPostStop (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-048905 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-048905 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

TestMultiNode/serial/FreshStart2Nodes (112.48s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-665460 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0907 00:28:40.803547  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-665460 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m52.035033412s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (112.48s)

TestMultiNode/serial/DeployApp2Nodes (5.69s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665460 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665460 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-665460 -- rollout status deployment/busybox: (3.942052694s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665460 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665460 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665460 -- exec busybox-7b57f96db7-shkgz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665460 -- exec busybox-7b57f96db7-zffrn -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665460 -- exec busybox-7b57f96db7-shkgz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665460 -- exec busybox-7b57f96db7-zffrn -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665460 -- exec busybox-7b57f96db7-shkgz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665460 -- exec busybox-7b57f96db7-zffrn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.69s)

TestMultiNode/serial/PingHostFrom2Pods (0.85s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665460 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665460 -- exec busybox-7b57f96db7-shkgz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665460 -- exec busybox-7b57f96db7-shkgz -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665460 -- exec busybox-7b57f96db7-zffrn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-665460 -- exec busybox-7b57f96db7-zffrn -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)
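
The shell pipeline in this test is terse enough to deserve a gloss: busybox's nslookup prints its resolver banner first, so the answer for host.minikube.internal lands on line 5 of the output, and the third space-separated field of that line is the address itself (192.168.39.1, the gateway of the libvirt network, in this run). A sketch of the same probe as it would run inside one of the busybox pods:

    # extract the host-side IP that host.minikube.internal resolves to
    HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
    # one ICMP round trip back to the hypervisor host
    ping -c 1 "$HOST_IP"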

TestMultiNode/serial/AddNode (48.31s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-665460 -v=5 --alsologtostderr
E0907 00:29:25.452687  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-665460 -v=5 --alsologtostderr: (47.686871144s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.31s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-665460 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.65s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

TestMultiNode/serial/CopyFile (7.82s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 cp testdata/cp-test.txt multinode-665460:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 ssh -n multinode-665460 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 cp multinode-665460:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile161424710/001/cp-test_multinode-665460.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 ssh -n multinode-665460 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 cp multinode-665460:/home/docker/cp-test.txt multinode-665460-m02:/home/docker/cp-test_multinode-665460_multinode-665460-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 ssh -n multinode-665460 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 ssh -n multinode-665460-m02 "sudo cat /home/docker/cp-test_multinode-665460_multinode-665460-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 cp multinode-665460:/home/docker/cp-test.txt multinode-665460-m03:/home/docker/cp-test_multinode-665460_multinode-665460-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 ssh -n multinode-665460 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 ssh -n multinode-665460-m03 "sudo cat /home/docker/cp-test_multinode-665460_multinode-665460-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 cp testdata/cp-test.txt multinode-665460-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 ssh -n multinode-665460-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 cp multinode-665460-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile161424710/001/cp-test_multinode-665460-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 ssh -n multinode-665460-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 cp multinode-665460-m02:/home/docker/cp-test.txt multinode-665460:/home/docker/cp-test_multinode-665460-m02_multinode-665460.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 ssh -n multinode-665460-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 ssh -n multinode-665460 "sudo cat /home/docker/cp-test_multinode-665460-m02_multinode-665460.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 cp multinode-665460-m02:/home/docker/cp-test.txt multinode-665460-m03:/home/docker/cp-test_multinode-665460-m02_multinode-665460-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 ssh -n multinode-665460-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 ssh -n multinode-665460-m03 "sudo cat /home/docker/cp-test_multinode-665460-m02_multinode-665460-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 cp testdata/cp-test.txt multinode-665460-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 ssh -n multinode-665460-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 cp multinode-665460-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile161424710/001/cp-test_multinode-665460-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 ssh -n multinode-665460-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 cp multinode-665460-m03:/home/docker/cp-test.txt multinode-665460:/home/docker/cp-test_multinode-665460-m03_multinode-665460.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 ssh -n multinode-665460-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 ssh -n multinode-665460 "sudo cat /home/docker/cp-test_multinode-665460-m03_multinode-665460.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 cp multinode-665460-m03:/home/docker/cp-test.txt multinode-665460-m02:/home/docker/cp-test_multinode-665460-m03_multinode-665460-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 ssh -n multinode-665460-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 ssh -n multinode-665460-m02 "sudo cat /home/docker/cp-test_multinode-665460-m03_multinode-665460-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.82s)
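
Condensed, the test above walks minikube cp through every direction it supports, each time re-reading the file over ssh -n <node> to confirm the bytes landed. A sketch of the three shapes (profile and node names taken from the run above; /tmp/cp-test.txt is a placeholder destination):

    # host -> node
    out/minikube-linux-amd64 -p multinode-665460 cp testdata/cp-test.txt multinode-665460:/home/docker/cp-test.txt
    # node -> host
    out/minikube-linux-amd64 -p multinode-665460 cp multinode-665460:/home/docker/cp-test.txt /tmp/cp-test.txt
    # node -> node
    out/minikube-linux-amd64 -p multinode-665460 cp multinode-665460:/home/docker/cp-test.txt multinode-665460-m02:/home/docker/cp-test.txt
    # verify on the receiving node
    out/minikube-linux-amd64 -p multinode-665460 ssh -n multinode-665460-m02 "sudo cat /home/docker/cp-test.txt"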

TestMultiNode/serial/StopNode (3.25s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-665460 node stop m03: (2.303437248s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-665460 status: exit status 7 (471.489558ms)

-- stdout --
	multinode-665460
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-665460-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-665460-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-665460 status --alsologtostderr: exit status 7 (473.841544ms)

-- stdout --
	multinode-665460
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-665460-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-665460-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0907 00:29:57.587390  158998 out.go:360] Setting OutFile to fd 1 ...
	I0907 00:29:57.587640  158998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:29:57.587649  158998 out.go:374] Setting ErrFile to fd 2...
	I0907 00:29:57.587653  158998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:29:57.587876  158998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-128697/.minikube/bin
	I0907 00:29:57.588051  158998 out.go:368] Setting JSON to false
	I0907 00:29:57.588082  158998 mustload.go:65] Loading cluster: multinode-665460
	I0907 00:29:57.588226  158998 notify.go:220] Checking for updates...
	I0907 00:29:57.588490  158998 config.go:182] Loaded profile config "multinode-665460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 00:29:57.588511  158998 status.go:174] checking status of multinode-665460 ...
	I0907 00:29:57.588969  158998 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:29:57.589010  158998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:29:57.606423  158998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37013
	I0907 00:29:57.606944  158998 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:29:57.607595  158998 main.go:141] libmachine: Using API Version  1
	I0907 00:29:57.607647  158998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:29:57.608024  158998 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:29:57.608235  158998 main.go:141] libmachine: (multinode-665460) Calling .GetState
	I0907 00:29:57.610134  158998 status.go:371] multinode-665460 host status = "Running" (err=<nil>)
	I0907 00:29:57.610156  158998 host.go:66] Checking if "multinode-665460" exists ...
	I0907 00:29:57.610626  158998 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:29:57.610685  158998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:29:57.627939  158998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40195
	I0907 00:29:57.628429  158998 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:29:57.629012  158998 main.go:141] libmachine: Using API Version  1
	I0907 00:29:57.629038  158998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:29:57.629429  158998 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:29:57.629649  158998 main.go:141] libmachine: (multinode-665460) Calling .GetIP
	I0907 00:29:57.632457  158998 main.go:141] libmachine: (multinode-665460) DBG | domain multinode-665460 has defined MAC address 52:54:00:7d:6e:9a in network mk-multinode-665460
	I0907 00:29:57.632860  158998 main.go:141] libmachine: (multinode-665460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:6e:9a", ip: ""} in network mk-multinode-665460: {Iface:virbr1 ExpiryTime:2025-09-07 01:27:14 +0000 UTC Type:0 Mac:52:54:00:7d:6e:9a Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-665460 Clientid:01:52:54:00:7d:6e:9a}
	I0907 00:29:57.632906  158998 main.go:141] libmachine: (multinode-665460) DBG | domain multinode-665460 has defined IP address 192.168.39.97 and MAC address 52:54:00:7d:6e:9a in network mk-multinode-665460
	I0907 00:29:57.633060  158998 host.go:66] Checking if "multinode-665460" exists ...
	I0907 00:29:57.633419  158998 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:29:57.633469  158998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:29:57.650612  158998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40601
	I0907 00:29:57.651132  158998 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:29:57.651711  158998 main.go:141] libmachine: Using API Version  1
	I0907 00:29:57.651736  158998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:29:57.652065  158998 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:29:57.652303  158998 main.go:141] libmachine: (multinode-665460) Calling .DriverName
	I0907 00:29:57.652523  158998 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0907 00:29:57.652549  158998 main.go:141] libmachine: (multinode-665460) Calling .GetSSHHostname
	I0907 00:29:57.655562  158998 main.go:141] libmachine: (multinode-665460) DBG | domain multinode-665460 has defined MAC address 52:54:00:7d:6e:9a in network mk-multinode-665460
	I0907 00:29:57.656155  158998 main.go:141] libmachine: (multinode-665460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:6e:9a", ip: ""} in network mk-multinode-665460: {Iface:virbr1 ExpiryTime:2025-09-07 01:27:14 +0000 UTC Type:0 Mac:52:54:00:7d:6e:9a Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-665460 Clientid:01:52:54:00:7d:6e:9a}
	I0907 00:29:57.656189  158998 main.go:141] libmachine: (multinode-665460) DBG | domain multinode-665460 has defined IP address 192.168.39.97 and MAC address 52:54:00:7d:6e:9a in network mk-multinode-665460
	I0907 00:29:57.656367  158998 main.go:141] libmachine: (multinode-665460) Calling .GetSSHPort
	I0907 00:29:57.656597  158998 main.go:141] libmachine: (multinode-665460) Calling .GetSSHKeyPath
	I0907 00:29:57.656848  158998 main.go:141] libmachine: (multinode-665460) Calling .GetSSHUsername
	I0907 00:29:57.657030  158998 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/multinode-665460/id_rsa Username:docker}
	I0907 00:29:57.742484  158998 ssh_runner.go:195] Run: systemctl --version
	I0907 00:29:57.749785  158998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:29:57.768665  158998 kubeconfig.go:125] found "multinode-665460" server: "https://192.168.39.97:8443"
	I0907 00:29:57.768703  158998 api_server.go:166] Checking apiserver status ...
	I0907 00:29:57.768739  158998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0907 00:29:57.790694  158998 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup
	W0907 00:29:57.803161  158998 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0907 00:29:57.803236  158998 ssh_runner.go:195] Run: ls
	I0907 00:29:57.809037  158998 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8443/healthz ...
	I0907 00:29:57.813992  158998 api_server.go:279] https://192.168.39.97:8443/healthz returned 200:
	ok
	I0907 00:29:57.814028  158998 status.go:463] multinode-665460 apiserver status = Running (err=<nil>)
	I0907 00:29:57.814043  158998 status.go:176] multinode-665460 status: &{Name:multinode-665460 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0907 00:29:57.814066  158998 status.go:174] checking status of multinode-665460-m02 ...
	I0907 00:29:57.814372  158998 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:29:57.814415  158998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:29:57.830681  158998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34105
	I0907 00:29:57.831259  158998 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:29:57.831790  158998 main.go:141] libmachine: Using API Version  1
	I0907 00:29:57.831814  158998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:29:57.832165  158998 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:29:57.832423  158998 main.go:141] libmachine: (multinode-665460-m02) Calling .GetState
	I0907 00:29:57.834082  158998 status.go:371] multinode-665460-m02 host status = "Running" (err=<nil>)
	I0907 00:29:57.834101  158998 host.go:66] Checking if "multinode-665460-m02" exists ...
	I0907 00:29:57.834384  158998 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:29:57.834455  158998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:29:57.850521  158998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37653
	I0907 00:29:57.851064  158998 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:29:57.851585  158998 main.go:141] libmachine: Using API Version  1
	I0907 00:29:57.851610  158998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:29:57.851967  158998 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:29:57.852211  158998 main.go:141] libmachine: (multinode-665460-m02) Calling .GetIP
	I0907 00:29:57.855168  158998 main.go:141] libmachine: (multinode-665460-m02) DBG | domain multinode-665460-m02 has defined MAC address 52:54:00:03:d0:42 in network mk-multinode-665460
	I0907 00:29:57.855692  158998 main.go:141] libmachine: (multinode-665460-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:d0:42", ip: ""} in network mk-multinode-665460: {Iface:virbr1 ExpiryTime:2025-09-07 01:28:16 +0000 UTC Type:0 Mac:52:54:00:03:d0:42 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-665460-m02 Clientid:01:52:54:00:03:d0:42}
	I0907 00:29:57.855731  158998 main.go:141] libmachine: (multinode-665460-m02) DBG | domain multinode-665460-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:03:d0:42 in network mk-multinode-665460
	I0907 00:29:57.855882  158998 host.go:66] Checking if "multinode-665460-m02" exists ...
	I0907 00:29:57.856192  158998 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:29:57.856233  158998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:29:57.873334  158998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41929
	I0907 00:29:57.873780  158998 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:29:57.874238  158998 main.go:141] libmachine: Using API Version  1
	I0907 00:29:57.874266  158998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:29:57.874688  158998 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:29:57.874905  158998 main.go:141] libmachine: (multinode-665460-m02) Calling .DriverName
	I0907 00:29:57.875114  158998 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0907 00:29:57.875137  158998 main.go:141] libmachine: (multinode-665460-m02) Calling .GetSSHHostname
	I0907 00:29:57.878172  158998 main.go:141] libmachine: (multinode-665460-m02) DBG | domain multinode-665460-m02 has defined MAC address 52:54:00:03:d0:42 in network mk-multinode-665460
	I0907 00:29:57.878679  158998 main.go:141] libmachine: (multinode-665460-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:d0:42", ip: ""} in network mk-multinode-665460: {Iface:virbr1 ExpiryTime:2025-09-07 01:28:16 +0000 UTC Type:0 Mac:52:54:00:03:d0:42 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-665460-m02 Clientid:01:52:54:00:03:d0:42}
	I0907 00:29:57.878713  158998 main.go:141] libmachine: (multinode-665460-m02) DBG | domain multinode-665460-m02 has defined IP address 192.168.39.65 and MAC address 52:54:00:03:d0:42 in network mk-multinode-665460
	I0907 00:29:57.878900  158998 main.go:141] libmachine: (multinode-665460-m02) Calling .GetSSHPort
	I0907 00:29:57.879095  158998 main.go:141] libmachine: (multinode-665460-m02) Calling .GetSSHKeyPath
	I0907 00:29:57.879283  158998 main.go:141] libmachine: (multinode-665460-m02) Calling .GetSSHUsername
	I0907 00:29:57.879502  158998 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21132-128697/.minikube/machines/multinode-665460-m02/id_rsa Username:docker}
	I0907 00:29:57.969885  158998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0907 00:29:57.988033  158998 status.go:176] multinode-665460-m02 status: &{Name:multinode-665460-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0907 00:29:57.988075  158998 status.go:174] checking status of multinode-665460-m03 ...
	I0907 00:29:57.988391  158998 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:29:57.988434  158998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:29:58.004617  158998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40991
	I0907 00:29:58.005163  158998 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:29:58.005700  158998 main.go:141] libmachine: Using API Version  1
	I0907 00:29:58.005723  158998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:29:58.006066  158998 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:29:58.006288  158998 main.go:141] libmachine: (multinode-665460-m03) Calling .GetState
	I0907 00:29:58.008049  158998 status.go:371] multinode-665460-m03 host status = "Stopped" (err=<nil>)
	I0907 00:29:58.008068  158998 status.go:384] host is not running, skipping remaining checks
	I0907 00:29:58.008075  158998 status.go:176] multinode-665460-m03 status: &{Name:multinode-665460-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.25s)
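
Note the exit-code convention the assertions above rely on: minikube status returns 0 only when everything is running, and a non-zero code otherwise (7 in this run, with one host stopped), while still printing the per-node table on stdout. A short sketch of scripting against that, assuming the same profile name:

    # treat a non-zero status exit as a state, not a script failure
    if out/minikube-linux-amd64 -p multinode-665460 status; then
      echo "all nodes running"
    else
      echo "at least one node is not running (status exited $?)"
    fi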

TestMultiNode/serial/StartAfterStop (39.45s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-665460 node start m03 -v=5 --alsologtostderr: (38.749368568s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.45s)

TestMultiNode/serial/RestartKeepsNodes (356.93s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-665460
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-665460
E0907 00:32:28.522813  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:33:40.808264  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-665460: (3m3.944158492s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-665460 --wait=true -v=5 --alsologtostderr
E0907 00:34:25.452674  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-665460 --wait=true -v=5 --alsologtostderr: (2m52.873799656s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-665460
--- PASS: TestMultiNode/serial/RestartKeepsNodes (356.93s)

TestMultiNode/serial/DeleteNode (2.91s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-665460 node delete m03: (2.329547438s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.91s)
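
The go-template used above iterates every node and prints the status of its Ready condition, which is how the test confirms that two Ready nodes remain after the delete. An equivalent jsonpath form (an alternative sketch, not what the test itself runs):

    # one line per node: True if the Ready condition holds
    kubectl get nodes -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'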

TestMultiNode/serial/StopMultiNode (181.71s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 stop
E0907 00:36:43.876233  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:38:40.808287  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 00:39:25.452394  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-665460 stop: (3m1.520102689s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-665460 status: exit status 7 (100.365014ms)

-- stdout --
	multinode-665460
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-665460-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-665460 status --alsologtostderr: exit status 7 (92.160356ms)

-- stdout --
	multinode-665460
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-665460-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0907 00:39:38.962648  161865 out.go:360] Setting OutFile to fd 1 ...
	I0907 00:39:38.962760  161865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:39:38.962765  161865 out.go:374] Setting ErrFile to fd 2...
	I0907 00:39:38.962771  161865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:39:38.963020  161865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-128697/.minikube/bin
	I0907 00:39:38.963245  161865 out.go:368] Setting JSON to false
	I0907 00:39:38.963282  161865 mustload.go:65] Loading cluster: multinode-665460
	I0907 00:39:38.963339  161865 notify.go:220] Checking for updates...
	I0907 00:39:38.963716  161865 config.go:182] Loaded profile config "multinode-665460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 00:39:38.963738  161865 status.go:174] checking status of multinode-665460 ...
	I0907 00:39:38.964173  161865 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:39:38.964214  161865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:39:38.980173  161865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34677
	I0907 00:39:38.980649  161865 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:39:38.981365  161865 main.go:141] libmachine: Using API Version  1
	I0907 00:39:38.981407  161865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:39:38.981913  161865 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:39:38.982139  161865 main.go:141] libmachine: (multinode-665460) Calling .GetState
	I0907 00:39:38.983791  161865 status.go:371] multinode-665460 host status = "Stopped" (err=<nil>)
	I0907 00:39:38.983810  161865 status.go:384] host is not running, skipping remaining checks
	I0907 00:39:38.983817  161865 status.go:176] multinode-665460 status: &{Name:multinode-665460 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0907 00:39:38.983848  161865 status.go:174] checking status of multinode-665460-m02 ...
	I0907 00:39:38.984247  161865 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21132-128697/.minikube/bin/docker-machine-driver-kvm2
	I0907 00:39:38.984291  161865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0907 00:39:39.000829  161865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39727
	I0907 00:39:39.001315  161865 main.go:141] libmachine: () Calling .GetVersion
	I0907 00:39:39.001800  161865 main.go:141] libmachine: Using API Version  1
	I0907 00:39:39.001827  161865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0907 00:39:39.002195  161865 main.go:141] libmachine: () Calling .GetMachineName
	I0907 00:39:39.002440  161865 main.go:141] libmachine: (multinode-665460-m02) Calling .GetState
	I0907 00:39:39.004434  161865 status.go:371] multinode-665460-m02 host status = "Stopped" (err=<nil>)
	I0907 00:39:39.004456  161865 status.go:384] host is not running, skipping remaining checks
	I0907 00:39:39.004465  161865 status.go:176] multinode-665460-m02 status: &{Name:multinode-665460-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.71s)

TestMultiNode/serial/RestartMultiNode (91.67s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-665460 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-665460 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m31.080993485s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-665460 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (91.67s)

TestMultiNode/serial/ValidateNameConflict (47.99s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-665460
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-665460-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-665460-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (68.949707ms)

-- stdout --
	* [multinode-665460-m02] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21132
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21132-128697/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-128697/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-665460-m02' is duplicated with machine name 'multinode-665460-m02' in profile 'multinode-665460'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-665460-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-665460-m03 --driver=kvm2  --container-runtime=crio: (46.683932003s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-665460
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-665460: exit status 80 (244.74508ms)

-- stdout --
	* Adding node m03 to cluster multinode-665460 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-665460-m03 already exists in multinode-665460-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-665460-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.99s)
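The two failures above are up-front name validation: exit 14 (MK_USAGE) when a new profile name collides with a machine name already allocated inside an existing multi-node profile, and exit 80 (GUEST_NODE_ADD) when the node being added already exists. A hedged Go sketch of the first check, using hypothetical types (not minikube's implementation):

package main

import "fmt"

type profile struct {
	name     string
	machines []string
}

// validateProfileName mirrors the duplicate check implied by the
// MK_USAGE failure above: a new profile may not reuse a machine name
// that an existing profile has already claimed.
func validateProfileName(requested string, existing []profile) error {
	for _, p := range existing {
		for _, m := range p.machines {
			if m == requested {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", requested, m, p.name)
			}
		}
	}
	return nil
}

func main() {
	existing := []profile{{name: "multinode-665460", machines: []string{"multinode-665460", "multinode-665460-m02"}}}
	fmt.Println(validateProfileName("multinode-665460-m02", existing)) // rejected, as in the log
	fmt.Println(validateProfileName("multinode-665460-m03", existing)) // nil: allowed
}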

TestScheduledStopUnix (115.75s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-095737 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-095737 --memory=3072 --driver=kvm2  --container-runtime=crio: (43.939094934s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-095737 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-095737 -n scheduled-stop-095737
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-095737 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0907 00:45:39.666462  133025 retry.go:31] will retry after 109.995µs: open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/scheduled-stop-095737/pid: no such file or directory
I0907 00:45:39.667653  133025 retry.go:31] will retry after 119.309µs: open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/scheduled-stop-095737/pid: no such file or directory
I0907 00:45:39.668795  133025 retry.go:31] will retry after 148.139µs: open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/scheduled-stop-095737/pid: no such file or directory
I0907 00:45:39.669943  133025 retry.go:31] will retry after 296.523µs: open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/scheduled-stop-095737/pid: no such file or directory
I0907 00:45:39.671092  133025 retry.go:31] will retry after 493.696µs: open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/scheduled-stop-095737/pid: no such file or directory
I0907 00:45:39.672239  133025 retry.go:31] will retry after 769µs: open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/scheduled-stop-095737/pid: no such file or directory
I0907 00:45:39.673381  133025 retry.go:31] will retry after 974.422µs: open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/scheduled-stop-095737/pid: no such file or directory
I0907 00:45:39.674524  133025 retry.go:31] will retry after 1.897193ms: open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/scheduled-stop-095737/pid: no such file or directory
I0907 00:45:39.676725  133025 retry.go:31] will retry after 1.281573ms: open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/scheduled-stop-095737/pid: no such file or directory
I0907 00:45:39.678986  133025 retry.go:31] will retry after 2.295538ms: open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/scheduled-stop-095737/pid: no such file or directory
I0907 00:45:39.682204  133025 retry.go:31] will retry after 6.146458ms: open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/scheduled-stop-095737/pid: no such file or directory
I0907 00:45:39.689513  133025 retry.go:31] will retry after 5.248098ms: open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/scheduled-stop-095737/pid: no such file or directory
I0907 00:45:39.695823  133025 retry.go:31] will retry after 10.111447ms: open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/scheduled-stop-095737/pid: no such file or directory
I0907 00:45:39.706652  133025 retry.go:31] will retry after 12.506271ms: open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/scheduled-stop-095737/pid: no such file or directory
I0907 00:45:39.719981  133025 retry.go:31] will retry after 30.302596ms: open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/scheduled-stop-095737/pid: no such file or directory
I0907 00:45:39.751300  133025 retry.go:31] will retry after 44.772616ms: open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/scheduled-stop-095737/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-095737 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-095737 -n scheduled-stop-095737
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-095737
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-095737 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-095737
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-095737: exit status 7 (78.230933ms)

-- stdout --
	scheduled-stop-095737
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-095737 -n scheduled-stop-095737
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-095737 -n scheduled-stop-095737: exit status 7 (69.520885ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-095737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-095737
--- PASS: TestScheduledStopUnix (115.75s)
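The retry.go:31 lines above show waits that roughly double with jitter (110µs, 119µs, 148µs, ... 44.7ms) while polling for the scheduled-stop pid file. A minimal Go sketch of that jittered exponential backoff, assuming a simple ReadFile probe (illustrative only, not minikube's retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"os"
	"time"
)

// retryOpen polls path with exponentially growing, jittered waits,
// matching the pattern of the retry.go:31 log lines above.
func retryOpen(path string, attempts int) error {
	wait := 100 * time.Microsecond
	var err error
	for i := 0; i < attempts; i++ {
		if _, err = os.ReadFile(path); err == nil {
			return nil
		}
		// Jitter keeps concurrent retriers from synchronizing.
		d := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
		wait *= 2
	}
	return errors.New("gave up: " + err.Error())
}

func main() {
	_ = retryOpen("/tmp/scheduled-stop-demo/pid", 8)
}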

TestRunningBinaryUpgrade (187.63s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2834639591 start -p running-upgrade-239150 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2834639591 start -p running-upgrade-239150 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m20.598689005s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-239150 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0907 00:53:23.877780  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-239150 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m45.66506117s)
helpers_test.go:175: Cleaning up "running-upgrade-239150" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-239150
--- PASS: TestRunningBinaryUpgrade (187.63s)

TestKubernetesUpgrade (181.32s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-155539 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0907 00:49:08.525053  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-155539 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (47.068180893s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-155539
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-155539: (2.336404456s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-155539 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-155539 status --format={{.Host}}: exit status 7 (90.407764ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-155539 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-155539 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m4.976364017s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-155539 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-155539 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-155539 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (100.961362ms)

-- stdout --
	* [kubernetes-upgrade-155539] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21132
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21132-128697/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-128697/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-155539
	    minikube start -p kubernetes-upgrade-155539 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1555392 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-155539 --kubernetes-version=v1.34.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-155539 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-155539 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m5.675251889s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-155539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-155539
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-155539: (1.014522119s)
--- PASS: TestKubernetesUpgrade (181.32s)
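The exit-106 block above exercises the downgrade guard: moving an existing v1.34.0 cluster to v1.28.0 is refused with K8S_DOWNGRADE_UNSUPPORTED, and recovery suggestions are printed instead. A toy Go sketch of such a guard; the hand-rolled version compare is a stand-in for a real semver library, and none of this is minikube's actual code:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse splits "vMAJOR.MINOR.PATCH" into its three numeric components.
func parse(v string) [3]int {
	var out [3]int
	for i, p := range strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3) {
		out[i], _ = strconv.Atoi(p)
	}
	return out
}

// older reports whether version a sorts before version b.
func older(a, b string) bool {
	pa, pb := parse(a), parse(b)
	for i := 0; i < 3; i++ {
		if pa[i] != pb[i] {
			return pa[i] < pb[i]
		}
	}
	return false
}

func main() {
	current, requested := "v1.34.0", "v1.28.0"
	if older(requested, current) {
		fmt.Printf("Unable to safely downgrade existing Kubernetes %s cluster to %s\n", current, requested)
	}
}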

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-425572 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-425572 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (94.008796ms)

-- stdout --
	* [NoKubernetes-425572] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21132
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21132-128697/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-128697/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
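The exit-14 failure above is pure flag validation: --kubernetes-version and --no-kubernetes are mutually exclusive. A minimal Go sketch of that check using the standard flag package (the flag names match the log; everything else is illustrative):

package main

import (
	"errors"
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	// Reject the combination up front, mirroring the MK_USAGE exit above.
	if *noK8s && *k8sVersion != "" {
		err := errors.New("cannot specify --kubernetes-version with --no-kubernetes")
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14) // exit status observed in the log
	}
	fmt.Println("flags OK")
}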

TestNoKubernetes/serial/StartWithK8s (123.18s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-425572 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-425572 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (2m2.868129997s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-425572 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (123.18s)

TestNoKubernetes/serial/StartWithStopK8s (16.2s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-425572 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-425572 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (15.193918369s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-425572 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-425572 status -o json: exit status 2 (291.870549ms)

-- stdout --
	{"Name":"NoKubernetes-425572","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-425572
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.20s)

TestNoKubernetes/serial/Start (48.01s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-425572 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-425572 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (48.013772857s)
--- PASS: TestNoKubernetes/serial/Start (48.01s)

TestStoppedBinaryUpgrade/Setup (0.49s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.49s)

TestStoppedBinaryUpgrade/Upgrade (150.33s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1076481975 start -p stopped-upgrade-175063 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
E0907 00:49:25.454129  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1076481975 start -p stopped-upgrade-175063 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m25.26159469s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1076481975 -p stopped-upgrade-175063 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1076481975 -p stopped-upgrade-175063 stop: (2.133657124s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-175063 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-175063 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m2.932334762s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (150.33s)
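The Upgrade test above drives three commands in order: start the profile with the old release binary, stop it, then start the same profile with the freshly built binary. A sketch of that sequence with os/exec (the binary paths are this run's temp artifacts and will differ elsewhere; illustrative only):

package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes one command, streaming its output, and aborts on failure.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", name, args, err)
	}
}

func main() {
	oldBin := "/tmp/minikube-v1.32.0.1076481975"
	newBin := "out/minikube-linux-amd64"
	run(oldBin, "start", "-p", "stopped-upgrade-175063", "--memory=3072", "--vm-driver=kvm2", "--container-runtime=crio")
	run(oldBin, "-p", "stopped-upgrade-175063", "stop")
	run(newBin, "start", "-p", "stopped-upgrade-175063", "--memory=3072", "--driver=kvm2", "--container-runtime=crio")
}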

TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-425572 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-425572 "sudo systemctl is-active --quiet service kubelet": exit status 1 (226.05843ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
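Note: `systemctl is-active --quiet <unit>` exits 0 only when the queried unit is active, so the non-zero ssh exit above is exactly what the test wants: proof that kubelet is not running. A small Go sketch of the same assertion for a local unit (assumes a systemd host; illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// unitInactive returns true when systemctl reports the unit as not
// active; Run returns a non-nil error for any non-zero exit status.
func unitInactive(unit string) bool {
	err := exec.Command("systemctl", "is-active", "--quiet", unit).Run()
	return err != nil
}

func main() {
	fmt.Println("kubelet inactive:", unitInactive("kubelet"))
}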

TestNoKubernetes/serial/ProfileList (1.16s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.16s)

TestNoKubernetes/serial/Stop (1.31s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-425572
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-425572: (1.312380602s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

TestNoKubernetes/serial/StartNoArgs (70.27s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-425572 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-425572 --driver=kvm2  --container-runtime=crio: (1m10.273369469s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (70.27s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-425572 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-425572 "sudo systemctl is-active --quiet service kubelet": exit status 1 (269.865482ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.19s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-175063
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-175063: (1.193536015s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.19s)

TestPause/serial/Start (114.54s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-257218 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-257218 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m54.543745333s)
--- PASS: TestPause/serial/Start (114.54s)

TestNetworkPlugins/group/false (5.92s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-513546 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-513546 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (126.219635ms)

-- stdout --
	* [false-513546] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21132
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21132-128697/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-128697/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0907 00:52:14.935198  170282 out.go:360] Setting OutFile to fd 1 ...
	I0907 00:52:14.935504  170282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:52:14.935517  170282 out.go:374] Setting ErrFile to fd 2...
	I0907 00:52:14.935524  170282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0907 00:52:14.935758  170282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21132-128697/.minikube/bin
	I0907 00:52:14.936490  170282 out.go:368] Setting JSON to false
	I0907 00:52:14.937591  170282 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5678,"bootTime":1757200657,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0907 00:52:14.937664  170282 start.go:140] virtualization: kvm guest
	I0907 00:52:14.939726  170282 out.go:179] * [false-513546] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0907 00:52:14.941018  170282 out.go:179]   - MINIKUBE_LOCATION=21132
	I0907 00:52:14.941061  170282 notify.go:220] Checking for updates...
	I0907 00:52:14.943282  170282 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0907 00:52:14.944799  170282 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21132-128697/kubeconfig
	I0907 00:52:14.946208  170282 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21132-128697/.minikube
	I0907 00:52:14.947488  170282 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0907 00:52:14.948725  170282 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0907 00:52:14.950696  170282 config.go:182] Loaded profile config "cert-expiration-456862": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 00:52:14.950988  170282 config.go:182] Loaded profile config "pause-257218": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0907 00:52:14.951165  170282 config.go:182] Loaded profile config "running-upgrade-239150": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I0907 00:52:14.951369  170282 driver.go:421] Setting default libvirt URI to qemu:///system
	I0907 00:52:14.996754  170282 out.go:179] * Using the kvm2 driver based on user configuration
	I0907 00:52:14.997839  170282 start.go:304] selected driver: kvm2
	I0907 00:52:14.997856  170282 start.go:918] validating driver "kvm2" against <nil>
	I0907 00:52:14.997885  170282 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0907 00:52:14.999703  170282 out.go:203] 
	W0907 00:52:15.001069  170282 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0907 00:52:15.002358  170282 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-513546 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-513546

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-513546

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-513546

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-513546

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-513546

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-513546

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-513546

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-513546

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-513546

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-513546

>>> host: /etc/nsswitch.conf:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: /etc/hosts:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: /etc/resolv.conf:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-513546

>>> host: crictl pods:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: crictl containers:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> k8s: describe netcat deployment:
error: context "false-513546" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-513546" does not exist

>>> k8s: netcat logs:
error: context "false-513546" does not exist

>>> k8s: describe coredns deployment:
error: context "false-513546" does not exist

>>> k8s: describe coredns pods:
error: context "false-513546" does not exist

>>> k8s: coredns logs:
error: context "false-513546" does not exist

>>> k8s: describe api server pod(s):
error: context "false-513546" does not exist

>>> k8s: api server logs:
error: context "false-513546" does not exist

>>> host: /etc/cni:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: ip a s:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: ip r s:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: iptables-save:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: iptables table nat:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> k8s: describe kube-proxy daemon set:
error: context "false-513546" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-513546" does not exist

>>> k8s: kube-proxy logs:
error: context "false-513546" does not exist

>>> host: kubelet daemon status:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: kubelet daemon config:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> k8s: kubelet logs:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21132-128697/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 07 Sep 2025 00:48:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.50.77:8443
  name: cert-expiration-456862
contexts:
- context:
    cluster: cert-expiration-456862
    extensions:
    - extension:
        last-update: Sun, 07 Sep 2025 00:48:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: cert-expiration-456862
  name: cert-expiration-456862
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-456862
  user:
    client-certificate: /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/cert-expiration-456862/client.crt
    client-key: /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/cert-expiration-456862/client.key
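Note: this kubeconfig holds only the cert-expiration-456862 cluster and an empty current-context, which is why every kubectl probe in this debugLogs dump reports that context "false-513546" was not found: the false-513546 profile was rejected before a cluster (and hence a context) was ever created.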

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-513546

>>> host: docker daemon status:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: docker daemon config:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: /etc/docker/daemon.json:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: docker system info:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: cri-docker daemon status:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: cri-docker daemon config:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: cri-dockerd version:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: containerd daemon status:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: containerd daemon config:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: /etc/containerd/config.toml:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: containerd config dump:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: crio daemon status:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: crio daemon config:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: /etc/crio:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

>>> host: crio config:
* Profile "false-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513546"

----------------------- debugLogs end: false-513546 [took: 5.587108098s] --------------------------------
helpers_test.go:175: Cleaning up "false-513546" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-513546
--- PASS: TestNetworkPlugins/group/false (5.92s)
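The exit-14 start above is rejected during validation because the crio runtime needs a CNI plugin, so --cni=false can never work with it. A hypothetical reduction of that check in Go (not minikube's actual validator):

package main

import (
	"fmt"
	"os"
)

// validateCNI mirrors the MK_USAGE rejection above: crio cannot run
// with CNI explicitly disabled.
func validateCNI(runtime, cni string) error {
	if runtime == "crio" && cni == "false" {
		return fmt.Errorf("The %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("crio", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14) // exit status observed in the log
	}
}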

TestStartStop/group/old-k8s-version/serial/FirstStart (144.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-477870 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-477870 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (2m24.541561596s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (144.54s)

TestStartStop/group/no-preload/serial/FirstStart (89.62s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-752207 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
E0907 00:53:40.803234  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-752207 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m29.619785577s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (89.62s)

TestStartStop/group/embed-certs/serial/FirstStart (89.32s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-631721 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
E0907 00:54:25.452822  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-631721 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m29.324664988s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (89.32s)

TestStartStop/group/old-k8s-version/serial/DeployApp (12.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-477870 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6f3678d1-60e7-4da6-9ae5-38c339fe4673] Pending
helpers_test.go:352: "busybox" [6f3678d1-60e7-4da6-9ae5-38c339fe4673] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6f3678d1-60e7-4da6-9ae5-38c339fe4673] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 12.00700597s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-477870 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (12.52s)
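The helpers_test.go lines above show the pod watch progressing Pending → Pending/ContainersNotReady → Running before the 8m deadline. A minimal Go sketch of that readiness wait, with PodStatus as a simplified stand-in for the Kubernetes API type (not the test's actual helper, which watches the API server):

package main

import (
	"errors"
	"fmt"
	"time"
)

type PodStatus struct {
	Phase           string
	ContainersReady bool
}

// waitHealthy polls until the pod is Running with all containers ready,
// or the deadline passes (8m0s in the test above).
func waitHealthy(get func() PodStatus, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if s := get(); s.Phase == "Running" && s.ContainersReady {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return errors.New("timed out waiting for pod to become healthy")
}

func main() {
	start := time.Now()
	err := waitHealthy(func() PodStatus {
		// Simulated pod: Pending for 4s, then Running and ready.
		if time.Since(start) < 4*time.Second {
			return PodStatus{Phase: "Pending"}
		}
		return PodStatus{Phase: "Running", ContainersReady: true}
	}, 8*time.Minute)
	fmt.Println("wait result:", err)
}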

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-079989 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-079989 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m28.494846392s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.49s)

TestStartStop/group/no-preload/serial/DeployApp (11.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-752207 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [09ab6ce3-9498-45af-8595-c2ca1862f27a] Pending
helpers_test.go:352: "busybox" [09ab6ce3-9498-45af-8595-c2ca1862f27a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [09ab6ce3-9498-45af-8595-c2ca1862f27a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004199881s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-752207 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.35s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-477870 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-477870 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.305156539s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-477870 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.42s)
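
The --images/--registries overrides rewrite the addon's image reference, so the resulting Deployment should point at fake.domain rather than the stock metrics-server image. One way to check the rewrite (how minikube joins registry and image is an assumption; the log does not show the composed reference):

    kubectl --context old-k8s-version-477870 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected to start with fake.domain/ and end with echoserver:1.4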

TestStartStop/group/old-k8s-version/serial/Stop (91.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-477870 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-477870 --alsologtostderr -v=3: (1m31.068093364s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.07s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-752207 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-752207 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.117380303s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-752207 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/no-preload/serial/Stop (91.36s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-752207 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-752207 --alsologtostderr -v=3: (1m31.360944943s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.36s)

TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-631721 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [15fcf3f9-6791-4f87-99a4-0f1b83dd4fac] Pending
helpers_test.go:352: "busybox" [15fcf3f9-6791-4f87-99a4-0f1b83dd4fac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [15fcf3f9-6791-4f87-99a4-0f1b83dd4fac] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.005688527s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-631721 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-631721 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-631721 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.030971528s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-631721 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/embed-certs/serial/Stop (91.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-631721 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-631721 --alsologtostderr -v=3: (1m31.122192951s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.12s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-079989 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f7047fe4-ea33-48a7-85c4-23c365155b8d] Pending
helpers_test.go:352: "busybox" [f7047fe4-ea33-48a7-85c4-23c365155b8d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f7047fe4-ea33-48a7-85c4-23c365155b8d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.003810579s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-079989 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.29s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-079989 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-079989 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.002580141s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-079989 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (91.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-079989 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-079989 --alsologtostderr -v=3: (1m31.566391324s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.57s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-477870 -n old-k8s-version-477870
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-477870 -n old-k8s-version-477870: exit status 7 (79.406912ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-477870 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
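
The "(may be ok)" note reflects minikube's status exit code, which is a bitmask (1 = host down, 2 = cluster down, 4 = kubernetes down), so 7 from a cleanly stopped profile is the expected value rather than a failure. Scripted defensively, the same sequence is:

    set +e
    out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-477870 -n old-k8s-version-477870
    rc=$?
    # 7 = 1+2+4: host, cluster and kubernetes all stopped, fine after 'minikube stop'
    if [ "$rc" -eq 7 ]; then
      out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-477870 \
        --images=MetricsScraper=registry.k8s.io/echoserver:1.4
    fi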

TestStartStop/group/old-k8s-version/serial/SecondStart (48.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-477870 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-477870 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (48.049896139s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-477870 -n old-k8s-version-477870
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.45s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-752207 -n no-preload-752207
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-752207 -n no-preload-752207: exit status 7 (81.02892ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-752207 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (72.66s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-752207 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-752207 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m12.240549073s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-752207 -n no-preload-752207
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (72.66s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (16.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-5x2hx" [357842c1-3895-45ab-bbde-6475aca68c13] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-5x2hx" [357842c1-3895-45ab-bbde-6475aca68c13] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.003907827s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (16.01s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-631721 -n embed-certs-631721
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-631721 -n embed-certs-631721: exit status 7 (80.683611ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-631721 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (53.69s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-631721 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-631721 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (53.25700104s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-631721 -n embed-certs-631721
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (53.69s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-5x2hx" [357842c1-3895-45ab-bbde-6475aca68c13] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004714276s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-477870 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-477870 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)
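
VerifyKubernetesImages flags anything in the runtime's image store that is not a stock Kubernetes image. An equivalent manual filter, assuming the JSON output carries CRI-style repoTags (a guess at the schema; the log only shows the verdict):

    out/minikube-linux-amd64 -p old-k8s-version-477870 image list --format=json \
      | jq -r '.[].repoTags[]?' \
      | grep -v '^registry.k8s.io/'
    # should leave gcr.io/k8s-minikube/busybox:1.28.4-glibc and kindest/kindnetd, as above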

TestStartStop/group/old-k8s-version/serial/Pause (3.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-477870 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-477870 --alsologtostderr -v=1: (1.231038516s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-477870 -n old-k8s-version-477870
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-477870 -n old-k8s-version-477870: exit status 2 (309.227552ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-477870 -n old-k8s-version-477870
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-477870 -n old-k8s-version-477870: exit status 2 (319.338638ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-477870 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-477870 -n old-k8s-version-477870
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-477870 -n old-k8s-version-477870
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.60s)
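
Pause freezes the whole profile: while paused, the apiserver reports Paused and the kubelet reports Stopped, each via the non-zero status exits the test tolerates. The full cycle, condensed from the steps above:

    out/minikube-linux-amd64 pause -p old-k8s-version-477870
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-477870   # Paused, exit 2
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-477870     # Stopped, exit 2
    out/minikube-linux-amd64 unpause -p old-k8s-version-477870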

TestStartStop/group/newest-cni/serial/FirstStart (64.92s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-101218 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-101218 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m4.918928089s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (64.92s)
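
--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 is passed straight through to kubeadm, so node pod CIDRs should be carved out of that range. A spot-check (the expected value is inferred from the flag, not shown in this log):

    kubectl --context newest-cni-101218 get nodes \
      -o jsonpath='{.items[0].spec.podCIDR}'   # should sit inside 10.42.0.0/16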

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.44s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kbrbk" [ba861824-b898-41ae-ad89-0cd92f9116a0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.441832852s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.44s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kbrbk" [ba861824-b898-41ae-ad89-0cd92f9116a0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00554827s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-752207 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-079989 -n default-k8s-diff-port-079989
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-079989 -n default-k8s-diff-port-079989: exit status 7 (82.484026ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-079989 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (71.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-079989 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-079989 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m10.628703305s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-079989 -n default-k8s-diff-port-079989
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (71.05s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-752207 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/no-preload/serial/Pause (3.21s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-752207 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-752207 --alsologtostderr -v=1: (1.071587011s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-752207 -n no-preload-752207
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-752207 -n no-preload-752207: exit status 2 (338.103492ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-752207 -n no-preload-752207
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-752207 -n no-preload-752207: exit status 2 (284.188926ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-752207 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-752207 -n no-preload-752207
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-752207 -n no-preload-752207
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.21s)

TestNetworkPlugins/group/auto/Start (121.9s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-513546 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-513546 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (2m1.897810719s)
--- PASS: TestNetworkPlugins/group/auto/Start (121.90s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-r2x42" [726f4c97-8d55-4740-8d2a-63a7d6c58847] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-r2x42" [726f4c97-8d55-4740-8d2a-63a7d6c58847] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.004949876s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-r2x42" [726f4c97-8d55-4740-8d2a-63a7d6c58847] Running
E0907 00:58:40.803842  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/functional-445996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013067712s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-631721 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-631721 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/embed-certs/serial/Pause (4s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-631721 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-631721 --alsologtostderr -v=1: (1.255384824s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-631721 -n embed-certs-631721
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-631721 -n embed-certs-631721: exit status 2 (330.458647ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-631721 -n embed-certs-631721
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-631721 -n embed-certs-631721: exit status 2 (327.648595ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-631721 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-631721 --alsologtostderr -v=1: (1.124736599s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-631721 -n embed-certs-631721
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-631721 -n embed-certs-631721
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.00s)

TestNetworkPlugins/group/kindnet/Start (89.68s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-513546 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-513546 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m29.68309067s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (89.68s)
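
--cni=kindnet deploys kindnetd as a DaemonSet, and --wait=true holds the start until the CNI settles enough for pods to schedule. A spot-check of the rollout (DaemonSet name, namespace and label are assumed from the usual kindnet manifest, not shown in this log):

    kubectl --context kindnet-513546 -n kube-system get ds kindnet
    kubectl --context kindnet-513546 -n kube-system get pods -l app=kindnet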

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.49s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-101218 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-101218 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.494624032s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.49s)

TestStartStop/group/newest-cni/serial/Stop (8.48s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-101218 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-101218 --alsologtostderr -v=3: (8.475094986s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.48s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-101218 -n newest-cni-101218
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-101218 -n newest-cni-101218: exit status 7 (88.47172ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-101218 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (74.73s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-101218 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-101218 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m14.36131143s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-101218 -n newest-cni-101218
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (74.73s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dwx8v" [76aa7db5-3125-41f9-8adf-71511415b156] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005596075s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dwx8v" [76aa7db5-3125-41f9-8adf-71511415b156] Running
E0907 00:59:25.451978  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/addons-331285/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004709174s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-079989 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-079989 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-079989 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-079989 -n default-k8s-diff-port-079989
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-079989 -n default-k8s-diff-port-079989: exit status 2 (293.575805ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-079989 -n default-k8s-diff-port-079989
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-079989 -n default-k8s-diff-port-079989: exit status 2 (290.325595ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-079989 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-079989 -n default-k8s-diff-port-079989
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-079989 -n default-k8s-diff-port-079989
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.12s)

TestNetworkPlugins/group/calico/Start (104.95s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-513546 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0907 00:59:50.265973  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/old-k8s-version-477870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
	[24 further near-identical cert_rotation.go:172 "Loading client cert failed" entries elided; timestamps 00:59:50-01:00:10, alternating between the old-k8s-version-477870 and no-preload-752207 client.crt paths above]
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-513546 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m44.945804039s)
--- PASS: TestNetworkPlugins/group/calico/Start (104.95s)
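
As with kindnet, the calico start only completes once the CNI pods settle, which is most of the 1m44s above. A spot-check against the stock calico manifest labels (assumed; the log does not print them):

    kubectl --context calico-513546 -n kube-system get pods -l k8s-app=calico-node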

TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-513546 "pgrep -a kubelet"
I0907 01:00:15.824547  133025 config.go:182] Loaded profile config "auto-513546": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

TestNetworkPlugins/group/auto/NetCatPod (10.36s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-513546 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-strs7" [b22756f6-705f-4aca-a0df-a06b9b29bba0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-strs7" [b22756f6-705f-4aca-a0df-a06b9b29bba0] Running
E0907 01:00:21.227654  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/no-preload-752207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005903911s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.36s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-101218 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)
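For manual spot checks, the same listing can be filtered down to image tags. A minimal sketch, assuming jq is available and that the JSON output carries a repoTags array per image (true of recent minikube releases, but verify against your version):

  # List images loaded into the profile and print their tags.
  out/minikube-linux-amd64 -p newest-cni-101218 image list --format=json \
    | jq -r '.[].repoTags[]?'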

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-101218 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-101218 -n newest-cni-101218
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-101218 -n newest-cni-101218: exit status 2 (285.908671ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-101218 -n newest-cni-101218
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-101218 -n newest-cni-101218: exit status 2 (280.956047ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-101218 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-101218 -n newest-cni-101218
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-101218 -n newest-cni-101218
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.05s)
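The pause sequence above can be reproduced by hand; a minimal shell sketch of the same steps. Note that minikube status deliberately exits with status 2 while components are paused or stopped, which the test records as "may be ok":

  # Pause the cluster, confirm both components report the paused state, then unpause.
  out/minikube-linux-amd64 pause -p newest-cni-101218 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-101218   # prints "Paused", exit 2
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-101218     # prints "Stopped", exit 2
  out/minikube-linux-amd64 unpause -p newest-cni-101218 --alsologtostderr -v=1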

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-4d8pg" [d0e5c0c2-1564-46c9-8e7f-b8ebb8784d4d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00530361s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (87.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-513546 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-513546 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m27.060470015s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (87.06s)
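As this run shows, --cni accepts a path to a CNI manifest in addition to the built-in plugin names (calico, flannel, and bridge are exercised elsewhere in this report). A minimal sketch, assuming a local kube-flannel.yaml:

  # Start a profile with a user-supplied CNI manifest instead of a built-in plugin.
  out/minikube-linux-amd64 start -p custom-flannel-513546 --memory=3072 \
    --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio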

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-513546 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-513546 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-513546 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-513546 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
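The DNS, Localhost, and HairPin checks for this and the following plugin groups all execute inside the netcat deployment and can be reproduced by hand against any profile in this report. A minimal sketch using the same invocations (-w 5 is the nc connect timeout, -i 5 the delay between probes, -z scans without sending data):

  # In-pod DNS resolution of the cluster API service.
  kubectl --context auto-513546 exec deployment/netcat -- nslookup kubernetes.default
  # Localhost reachability on the pod's own listening port.
  kubectl --context auto-513546 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # Hairpin: reach the pod back through its own service name.
  kubectl --context auto-513546 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"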

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (15.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-513546 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dz8cq" [de32866a-e2ff-4ab7-80b9-5f478cb84a11] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0907 01:00:31.243332  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/old-k8s-version-477870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-dz8cq" [de32866a-e2ff-4ab7-80b9-5f478cb84a11] Running
E0907 01:00:41.709494  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/no-preload-752207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 15.004648373s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (15.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-513546 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-513546 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-513546 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (99.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-513546 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-513546 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m39.99410086s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (99.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (101.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-513546 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0907 01:01:12.205004  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/old-k8s-version-477870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-513546 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m41.981579499s)
--- PASS: TestNetworkPlugins/group/flannel/Start (101.98s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-55r9n" [7917aee5-ccd4-4900-b781-fa24cd4a69e6] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E0907 01:01:22.257460  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/default-k8s-diff-port-079989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:01:22.263927  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/default-k8s-diff-port-079989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:01:22.275407  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/default-k8s-diff-port-079989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:01:22.296884  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/default-k8s-diff-port-079989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:01:22.339143  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/default-k8s-diff-port-079989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:01:22.420779  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/default-k8s-diff-port-079989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:01:22.582246  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/default-k8s-diff-port-079989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:01:22.671518  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/no-preload-752207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:01:22.904075  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/default-k8s-diff-port-079989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:01:23.545476  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/default-k8s-diff-port-079989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006241771s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
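The ControllerPod checks poll until a pod matching the label selector is healthy; kubectl wait expresses the same condition as a one-liner. A sketch, with the 10m timeout mirroring the test's wait window:

  # Block until the calico-node pod reports Ready, or fail after 10 minutes.
  kubectl --context calico-513546 wait --for=condition=ready pod \
    -l k8s-app=calico-node -n kube-system --timeout=10m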

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-513546 "pgrep -a kubelet"
E0907 01:01:24.827308  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/default-k8s-diff-port-079989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I0907 01:01:25.092772  133025 config.go:182] Loaded profile config "calico-513546": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (15.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-513546 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-x6jpw" [0095ba04-94ac-4925-a28f-31e72d093a45] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0907 01:01:27.389630  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/default-k8s-diff-port-079989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:01:32.511068  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/default-k8s-diff-port-079989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-x6jpw" [0095ba04-94ac-4925-a28f-31e72d093a45] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 15.005181213s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (15.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-513546 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-513546 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-513546 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-513546 "pgrep -a kubelet"
I0907 01:01:51.663704  133025 config.go:182] Loaded profile config "custom-flannel-513546": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (14.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-513546 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gzq5k" [8b5fc222-37f4-421d-b99b-fa3f90e9ae09] Pending
helpers_test.go:352: "netcat-cd4db9dbf-gzq5k" [8b5fc222-37f4-421d-b99b-fa3f90e9ae09] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gzq5k" [8b5fc222-37f4-421d-b99b-fa3f90e9ae09] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.004486451s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (87.70s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-513546 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0907 01:02:03.234577  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/default-k8s-diff-port-079989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-513546 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m27.695834806s)
--- PASS: TestNetworkPlugins/group/bridge/Start (87.70s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-513546 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-513546 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-513546 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-513546 "pgrep -a kubelet"
I0907 01:02:24.827520  133025 config.go:182] Loaded profile config "enable-default-cni-513546": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-513546 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5zxkr" [feb0bb40-e8b6-4ef2-b69c-b5cb628ebc78] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5zxkr" [feb0bb40-e8b6-4ef2-b69c-b5cb628ebc78] Running
E0907 01:02:34.127952  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/old-k8s-version-477870/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005297585s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-513546 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-513546 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-513546 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-nll4f" [71aa4979-352c-46da-87f0-a7bfea4f590b] Running
E0907 01:02:44.196799  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/default-k8s-diff-port-079989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0907 01:02:44.593956  133025 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/no-preload-752207/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00524517s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-513546 "pgrep -a kubelet"
I0907 01:02:49.334865  133025 config.go:182] Loaded profile config "flannel-513546": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-513546 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rljvj" [e455e681-93e4-4b5d-b394-be01978ecd02] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rljvj" [e455e681-93e4-4b5d-b394-be01978ecd02] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004082529s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-513546 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-513546 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-513546 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-513546 "pgrep -a kubelet"
I0907 01:03:28.249772  133025 config.go:182] Loaded profile config "bridge-513546": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-513546 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-c9mbg" [3e33b488-4638-49a6-8207-d77caaa58bf8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-c9mbg" [3e33b488-4638-49a6-8207-d77caaa58bf8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005396193s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-513546 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-513546 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-513546 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (40/324)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.0/cached-images 0
15 TestDownloadOnly/v1.34.0/binaries 0
16 TestDownloadOnly/v1.34.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.32
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
132 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
135 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
136 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
272 TestStartStop/group/disable-driver-mounts 0.4
278 TestNetworkPlugins/group/kubenet 4.3
286 TestNetworkPlugins/group/cilium 4.67
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.32s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-331285 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.32s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.40s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-203131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-203131
--- SKIP: TestStartStop/group/disable-driver-mounts (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-513546 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-513546

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-513546

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-513546

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-513546

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-513546

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-513546

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-513546

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-513546

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-513546

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-513546

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: /etc/hosts:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: /etc/resolv.conf:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-513546

>>> host: crictl pods:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: crictl containers:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> k8s: describe netcat deployment:
error: context "kubenet-513546" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-513546" does not exist

>>> k8s: netcat logs:
error: context "kubenet-513546" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-513546" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-513546" does not exist

>>> k8s: coredns logs:
error: context "kubenet-513546" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-513546" does not exist

>>> k8s: api server logs:
error: context "kubenet-513546" does not exist

>>> host: /etc/cni:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: ip a s:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: ip r s:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: iptables-save:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: iptables table nat:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-513546" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-513546" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-513546" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: kubelet daemon config:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> k8s: kubelet logs:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21132-128697/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 07 Sep 2025 00:48:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.50.77:8443
  name: cert-expiration-456862
contexts:
- context:
    cluster: cert-expiration-456862
    extensions:
    - extension:
        last-update: Sun, 07 Sep 2025 00:48:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: cert-expiration-456862
  name: cert-expiration-456862
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-456862
  user:
    client-certificate: /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/cert-expiration-456862/client.crt
    client-key: /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/cert-expiration-456862/client.key
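Note: this captured kubeconfig has current-context set to "" and defines only the cert-expiration-456862 cluster, context, and user, which is why every kubectl probe in this dump fails with "context was not found" for kubenet-513546. As a minimal sketch (assuming the kubeconfig shown above), the only context kubectl could resolve here is the one that exists:

  kubectl config use-context cert-expiration-456862   # select the only defined context
  kubectl get nodes                                   # sanity check against that cluster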

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-513546

>>> host: docker daemon status:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: docker daemon config:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: docker system info:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: cri-docker daemon status:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: cri-docker daemon config:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: cri-dockerd version:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: containerd daemon status:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: containerd daemon config:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: containerd config dump:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: crio daemon status:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: crio daemon config:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: /etc/crio:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

>>> host: crio config:
* Profile "kubenet-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513546"

----------------------- debugLogs end: kubenet-513546 [took: 4.107748405s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-513546" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-513546
--- SKIP: TestNetworkPlugins/group/kubenet (4.30s)
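Note: this skip is expected on a crio job, since kubenet is a legacy kubelet network mode rather than a CNI plugin. A minimal, illustrative sketch of starting the missing profile on a runtime that still supports kubenet; the --network-plugin flag is deprecated and its availability varies by minikube version, so treat this as an assumption rather than a supported invocation:

  minikube start -p kubenet-513546 --container-runtime=docker --network-plugin=kubenet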

TestNetworkPlugins/group/cilium (4.67s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it interferes with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-513546 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-513546

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-513546

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-513546

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-513546

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-513546

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-513546

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-513546

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-513546

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-513546

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-513546

>>> host: /etc/nsswitch.conf:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: /etc/hosts:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: /etc/resolv.conf:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-513546

>>> host: crictl pods:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: crictl containers:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> k8s: describe netcat deployment:
error: context "cilium-513546" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-513546" does not exist

>>> k8s: netcat logs:
error: context "cilium-513546" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-513546" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-513546" does not exist

>>> k8s: coredns logs:
error: context "cilium-513546" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-513546" does not exist

>>> k8s: api server logs:
error: context "cilium-513546" does not exist

>>> host: /etc/cni:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: ip a s:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: ip r s:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: iptables-save:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: iptables table nat:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-513546

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-513546

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-513546" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-513546" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-513546

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-513546

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-513546" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-513546" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-513546" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-513546" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-513546" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: kubelet daemon config:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> k8s: kubelet logs:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21132-128697/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 07 Sep 2025 00:52:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.50.77:8443
  name: cert-expiration-456862
contexts:
- context:
    cluster: cert-expiration-456862
    extensions:
    - extension:
        last-update: Sun, 07 Sep 2025 00:52:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: cert-expiration-456862
  name: cert-expiration-456862
current-context: cert-expiration-456862
kind: Config
preferences: {}
users:
- name: cert-expiration-456862
  user:
    client-certificate: /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/cert-expiration-456862/client.crt
    client-key: /home/jenkins/minikube-integration/21132-128697/.minikube/profiles/cert-expiration-456862/client.key
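Note: unlike the kubenet dump earlier, current-context is now set to cert-expiration-456862, but the cilium-513546 context still does not exist, which accounts for the mixed "context was not found" and "does not exist" errors around this block. A quick, standard way to check what a dump like this could have targeted:

  kubectl config get-contexts     # list all contexts in the kubeconfig
  kubectl config current-context  # show which one is active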

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-513546

>>> host: docker daemon status:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: docker daemon config:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: docker system info:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: cri-docker daemon status:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: cri-docker daemon config:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: cri-dockerd version:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: containerd daemon status:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: containerd daemon config:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: containerd config dump:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: crio daemon status:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: crio daemon config:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: /etc/crio:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

>>> host: crio config:
* Profile "cilium-513546" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513546"

----------------------- debugLogs end: cilium-513546 [took: 4.505253749s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-513546" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-513546
--- SKIP: TestNetworkPlugins/group/cilium (4.67s)
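Note: although this legacy cilium test is skipped, cilium itself can still be exercised through minikube's supported --cni selector; a minimal sketch, not part of this test run:

  minikube start -p cilium-513546 --cni=cilium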
