Test Report: KVM_Linux_crio 21409

0aa34a444c66e47b3763835c9f1ccee8527d3e22:2025-09-04:41276

Failed tests (4/323)

Order  Failed test                                     Duration (s)
37     TestAddons/parallel/Ingress                     164.68
244    TestPreload                                     174.61
276    TestNoKubernetes/serial/StartNoArgs             67.52
288    TestPause/serial/SecondStartNoReconfiguration   56.32
TestAddons/parallel/Ingress (164.68s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-691233 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-691233 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-691233 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [aaaa00f4-fe42-4882-a823-4b1add3972ae] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [aaaa00f4-fe42-4882-a823-4b1add3972ae] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 17.004272726s
I0904 05:58:04.956356 1120074 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-691233 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-691233 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.159386914s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-691233 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-691233 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.193
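The failed step can be re-run by hand for triage; status 28 is curl's "operation timed out" exit code propagated through ssh. A minimal sketch, assuming the addons-691233 profile is still running and the ingress objects from the testdata manifests are unchanged:

  # Check that the ingress controller and the Ingress object are in place
  kubectl --context addons-691233 get pods -n ingress-nginx
  kubectl --context addons-691233 get ingress -A
  # Repeat the probe with verbose output and an explicit timeout
  out/minikube-linux-amd64 -p addons-691233 ssh \
    "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"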
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-691233 -n addons-691233
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-691233 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-691233 logs -n 25: (1.224382455s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME
	delete │ -p download-only-515248 │ download-only-515248 │ jenkins │ v1.36.0 │ 04 Sep 25 05:53 UTC │ 04 Sep 25 05:53 UTC
	start │ --download-only -p binary-mirror-795770 --alsologtostderr --binary-mirror http://127.0.0.1:37793 --driver=kvm2  --container-runtime=crio │ binary-mirror-795770 │ jenkins │ v1.36.0 │ 04 Sep 25 05:53 UTC │
	delete │ -p binary-mirror-795770 │ binary-mirror-795770 │ jenkins │ v1.36.0 │ 04 Sep 25 05:53 UTC │ 04 Sep 25 05:53 UTC
	addons │ disable dashboard -p addons-691233 │ addons-691233 │ jenkins │ v1.36.0 │ 04 Sep 25 05:53 UTC │
	addons │ enable dashboard -p addons-691233 │ addons-691233 │ jenkins │ v1.36.0 │ 04 Sep 25 05:53 UTC │
	start │ -p addons-691233 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-691233 │ jenkins │ v1.36.0 │ 04 Sep 25 05:53 UTC │ 04 Sep 25 05:57 UTC
	addons │ addons-691233 addons disable volcano --alsologtostderr -v=1 │ addons-691233 │ jenkins │ v1.36.0 │ 04 Sep 25 05:57 UTC │ 04 Sep 25 05:57 UTC
	addons │ addons-691233 addons disable gcp-auth --alsologtostderr -v=1 │ addons-691233 │ jenkins │ v1.36.0 │ 04 Sep 25 05:57 UTC │ 04 Sep 25 05:57 UTC
	addons │ enable headlamp -p addons-691233 --alsologtostderr -v=1 │ addons-691233 │ jenkins │ v1.36.0 │ 04 Sep 25 05:57 UTC │ 04 Sep 25 05:57 UTC
	addons │ addons-691233 addons disable metrics-server --alsologtostderr -v=1 │ addons-691233 │ jenkins │ v1.36.0 │ 04 Sep 25 05:57 UTC │ 04 Sep 25 05:57 UTC
	addons │ addons-691233 addons disable yakd --alsologtostderr -v=1 │ addons-691233 │ jenkins │ v1.36.0 │ 04 Sep 25 05:57 UTC │ 04 Sep 25 05:57 UTC
	addons │ addons-691233 addons disable headlamp --alsologtostderr -v=1 │ addons-691233 │ jenkins │ v1.36.0 │ 04 Sep 25 05:57 UTC │ 04 Sep 25 05:57 UTC
	ip │ addons-691233 ip │ addons-691233 │ jenkins │ v1.36.0 │ 04 Sep 25 05:57 UTC │ 04 Sep 25 05:57 UTC
	addons │ addons-691233 addons disable registry --alsologtostderr -v=1 │ addons-691233 │ jenkins │ v1.36.0 │ 04 Sep 25 05:57 UTC │ 04 Sep 25 05:57 UTC
	addons │ addons-691233 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-691233 │ jenkins │ v1.36.0 │ 04 Sep 25 05:57 UTC │ 04 Sep 25 05:57 UTC
	addons │ addons-691233 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-691233 │ jenkins │ v1.36.0 │ 04 Sep 25 05:57 UTC │ 04 Sep 25 05:57 UTC
	addons │ addons-691233 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-691233 │ jenkins │ v1.36.0 │ 04 Sep 25 05:57 UTC │ 04 Sep 25 05:57 UTC
	addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-691233 │ addons-691233 │ jenkins │ v1.36.0 │ 04 Sep 25 05:57 UTC │ 04 Sep 25 05:57 UTC
	addons │ addons-691233 addons disable registry-creds --alsologtostderr -v=1 │ addons-691233 │ jenkins │ v1.36.0 │ 04 Sep 25 05:57 UTC │ 04 Sep 25 05:57 UTC
	ssh │ addons-691233 ssh cat /opt/local-path-provisioner/pvc-e010504f-7da0-4a3a-8765-f897fccbcf3a_default_test-pvc/file1 │ addons-691233 │ jenkins │ v1.36.0 │ 04 Sep 25 05:57 UTC │ 04 Sep 25 05:57 UTC
	addons │ addons-691233 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-691233 │ jenkins │ v1.36.0 │ 04 Sep 25 05:57 UTC │ 04 Sep 25 05:57 UTC
	ssh │ addons-691233 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-691233 │ jenkins │ v1.36.0 │ 04 Sep 25 05:58 UTC │
	addons │ addons-691233 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-691233 │ jenkins │ v1.36.0 │ 04 Sep 25 05:58 UTC │ 04 Sep 25 05:58 UTC
	addons │ addons-691233 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-691233 │ jenkins │ v1.36.0 │ 04 Sep 25 05:58 UTC │ 04 Sep 25 05:58 UTC
	ip │ addons-691233 ip │ addons-691233 │ jenkins │ v1.36.0 │ 04 Sep 25 06:00 UTC │ 04 Sep 25 06:00 UTC
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 05:53:44
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 05:53:44.589878 1120739 out.go:360] Setting OutFile to fd 1 ...
	I0904 05:53:44.590149 1120739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 05:53:44.590159 1120739 out.go:374] Setting ErrFile to fd 2...
	I0904 05:53:44.590164 1120739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 05:53:44.590355 1120739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1115845/.minikube/bin
	I0904 05:53:44.591021 1120739 out.go:368] Setting JSON to false
	I0904 05:53:44.591936 1120739 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":12968,"bootTime":1756952257,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 05:53:44.592042 1120739 start.go:140] virtualization: kvm guest
	I0904 05:53:44.593841 1120739 out.go:179] * [addons-691233] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 05:53:44.595142 1120739 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 05:53:44.595184 1120739 notify.go:220] Checking for updates...
	I0904 05:53:44.597485 1120739 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 05:53:44.598549 1120739 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1115845/kubeconfig
	I0904 05:53:44.599794 1120739 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1115845/.minikube
	I0904 05:53:44.600922 1120739 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 05:53:44.602171 1120739 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 05:53:44.603385 1120739 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 05:53:44.634766 1120739 out.go:179] * Using the kvm2 driver based on user configuration
	I0904 05:53:44.635999 1120739 start.go:304] selected driver: kvm2
	I0904 05:53:44.636011 1120739 start.go:918] validating driver "kvm2" against <nil>
	I0904 05:53:44.636022 1120739 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 05:53:44.636755 1120739 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 05:53:44.636845 1120739 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1115845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0904 05:53:44.652249 1120739 install.go:137] /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0904 05:53:44.652319 1120739 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 05:53:44.652609 1120739 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 05:53:44.652647 1120739 cni.go:84] Creating CNI manager for ""
	I0904 05:53:44.652711 1120739 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 05:53:44.652722 1120739 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0904 05:53:44.652802 1120739 start.go:348] cluster config:
	{Name:addons-691233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-691233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 05:53:44.652925 1120739 iso.go:125] acquiring lock: {Name:mk8046b526ef8e07e7f8bc343ab464442f664799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 05:53:44.655119 1120739 out.go:179] * Starting "addons-691233" primary control-plane node in "addons-691233" cluster
	I0904 05:53:44.656213 1120739 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 05:53:44.656241 1120739 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21409-1115845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 05:53:44.656248 1120739 cache.go:58] Caching tarball of preloaded images
	I0904 05:53:44.656317 1120739 preload.go:172] Found /home/jenkins/minikube-integration/21409-1115845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0904 05:53:44.656329 1120739 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0904 05:53:44.656650 1120739 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/config.json ...
	I0904 05:53:44.656684 1120739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/config.json: {Name:mk7edbe11f0527755946f9b1d4090586117bd826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 05:53:44.656833 1120739 start.go:360] acquireMachinesLock for addons-691233: {Name:mk3d0e482c06d0ca53afa1318fbdd30ffc2f15b4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 05:53:44.656894 1120739 start.go:364] duration metric: took 45.13µs to acquireMachinesLock for "addons-691233"
	I0904 05:53:44.656917 1120739 start.go:93] Provisioning new machine with config: &{Name:addons-691233 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-691233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 05:53:44.656985 1120739 start.go:125] createHost starting for "" (driver="kvm2")
	I0904 05:53:44.658528 1120739 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0904 05:53:44.658658 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:53:44.658710 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:53:44.672988 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I0904 05:53:44.673449 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:53:44.673965 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:53:44.673990 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:53:44.674385 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:53:44.674594 1120739 main.go:141] libmachine: (addons-691233) Calling .GetMachineName
	I0904 05:53:44.674756 1120739 main.go:141] libmachine: (addons-691233) Calling .DriverName
	I0904 05:53:44.674949 1120739 start.go:159] libmachine.API.Create for "addons-691233" (driver="kvm2")
	I0904 05:53:44.674984 1120739 client.go:168] LocalClient.Create starting
	I0904 05:53:44.675021 1120739 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem
	I0904 05:53:45.022493 1120739 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/cert.pem
	I0904 05:53:45.683932 1120739 main.go:141] libmachine: Running pre-create checks...
	I0904 05:53:45.683958 1120739 main.go:141] libmachine: (addons-691233) Calling .PreCreateCheck
	I0904 05:53:45.684510 1120739 main.go:141] libmachine: (addons-691233) Calling .GetConfigRaw
	I0904 05:53:45.685060 1120739 main.go:141] libmachine: Creating machine...
	I0904 05:53:45.685078 1120739 main.go:141] libmachine: (addons-691233) Calling .Create
	I0904 05:53:45.685260 1120739 main.go:141] libmachine: (addons-691233) creating KVM machine...
	I0904 05:53:45.685282 1120739 main.go:141] libmachine: (addons-691233) creating network...
	I0904 05:53:45.686774 1120739 main.go:141] libmachine: (addons-691233) DBG | found existing default KVM network
	I0904 05:53:45.687565 1120739 main.go:141] libmachine: (addons-691233) DBG | I0904 05:53:45.687388 1120761 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000136c0}
	I0904 05:53:45.687601 1120739 main.go:141] libmachine: (addons-691233) DBG | created network xml: 
	I0904 05:53:45.687612 1120739 main.go:141] libmachine: (addons-691233) DBG | <network>
	I0904 05:53:45.687620 1120739 main.go:141] libmachine: (addons-691233) DBG |   <name>mk-addons-691233</name>
	I0904 05:53:45.687651 1120739 main.go:141] libmachine: (addons-691233) DBG |   <dns enable='no'/>
	I0904 05:53:45.687674 1120739 main.go:141] libmachine: (addons-691233) DBG |   
	I0904 05:53:45.687726 1120739 main.go:141] libmachine: (addons-691233) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0904 05:53:45.687753 1120739 main.go:141] libmachine: (addons-691233) DBG |     <dhcp>
	I0904 05:53:45.687767 1120739 main.go:141] libmachine: (addons-691233) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0904 05:53:45.687781 1120739 main.go:141] libmachine: (addons-691233) DBG |     </dhcp>
	I0904 05:53:45.687792 1120739 main.go:141] libmachine: (addons-691233) DBG |   </ip>
	I0904 05:53:45.687804 1120739 main.go:141] libmachine: (addons-691233) DBG |   
	I0904 05:53:45.687814 1120739 main.go:141] libmachine: (addons-691233) DBG | </network>
	I0904 05:53:45.687827 1120739 main.go:141] libmachine: (addons-691233) DBG | 
	I0904 05:53:45.692776 1120739 main.go:141] libmachine: (addons-691233) DBG | trying to create private KVM network mk-addons-691233 192.168.39.0/24...
	I0904 05:53:45.757290 1120739 main.go:141] libmachine: (addons-691233) DBG | private KVM network mk-addons-691233 192.168.39.0/24 created
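	# A minimal sketch for inspecting this network from the host (assumes
	# virsh is available; the qemu:///system URI comes from the log above):
	virsh --connect qemu:///system net-dumpxml mk-addons-691233
	virsh --connect qemu:///system net-dhcp-leases mk-addons-691233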
	I0904 05:53:45.757329 1120739 main.go:141] libmachine: (addons-691233) setting up store path in /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233 ...
	I0904 05:53:45.757340 1120739 main.go:141] libmachine: (addons-691233) building disk image from file:///home/jenkins/minikube-integration/21409-1115845/.minikube/cache/iso/amd64/minikube-v1.36.0-1756846819-21409-amd64.iso
	I0904 05:53:45.757351 1120739 main.go:141] libmachine: (addons-691233) DBG | I0904 05:53:45.757257 1120761 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21409-1115845/.minikube
	I0904 05:53:45.757505 1120739 main.go:141] libmachine: (addons-691233) Downloading /home/jenkins/minikube-integration/21409-1115845/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21409-1115845/.minikube/cache/iso/amd64/minikube-v1.36.0-1756846819-21409-amd64.iso...
	I0904 05:53:46.084021 1120739 main.go:141] libmachine: (addons-691233) DBG | I0904 05:53:46.083881 1120761 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/id_rsa...
	I0904 05:53:46.279480 1120739 main.go:141] libmachine: (addons-691233) DBG | I0904 05:53:46.279272 1120761 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/addons-691233.rawdisk...
	I0904 05:53:46.279522 1120739 main.go:141] libmachine: (addons-691233) DBG | Writing magic tar header
	I0904 05:53:46.279539 1120739 main.go:141] libmachine: (addons-691233) DBG | Writing SSH key tar header
	I0904 05:53:46.279556 1120739 main.go:141] libmachine: (addons-691233) DBG | I0904 05:53:46.279462 1120761 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233 ...
	I0904 05:53:46.279573 1120739 main.go:141] libmachine: (addons-691233) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233
	I0904 05:53:46.279662 1120739 main.go:141] libmachine: (addons-691233) setting executable bit set on /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233 (perms=drwx------)
	I0904 05:53:46.279684 1120739 main.go:141] libmachine: (addons-691233) setting executable bit set on /home/jenkins/minikube-integration/21409-1115845/.minikube/machines (perms=drwxr-xr-x)
	I0904 05:53:46.279694 1120739 main.go:141] libmachine: (addons-691233) setting executable bit set on /home/jenkins/minikube-integration/21409-1115845/.minikube (perms=drwxr-xr-x)
	I0904 05:53:46.279701 1120739 main.go:141] libmachine: (addons-691233) setting executable bit set on /home/jenkins/minikube-integration/21409-1115845 (perms=drwxrwxr-x)
	I0904 05:53:46.279709 1120739 main.go:141] libmachine: (addons-691233) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0904 05:53:46.279724 1120739 main.go:141] libmachine: (addons-691233) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0904 05:53:46.279733 1120739 main.go:141] libmachine: (addons-691233) creating domain...
	I0904 05:53:46.279862 1120739 main.go:141] libmachine: (addons-691233) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-1115845/.minikube/machines
	I0904 05:53:46.279916 1120739 main.go:141] libmachine: (addons-691233) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-1115845/.minikube
	I0904 05:53:46.279952 1120739 main.go:141] libmachine: (addons-691233) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-1115845
	I0904 05:53:46.279976 1120739 main.go:141] libmachine: (addons-691233) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0904 05:53:46.279985 1120739 main.go:141] libmachine: (addons-691233) DBG | checking permissions on dir: /home/jenkins
	I0904 05:53:46.280012 1120739 main.go:141] libmachine: (addons-691233) DBG | checking permissions on dir: /home
	I0904 05:53:46.280027 1120739 main.go:141] libmachine: (addons-691233) DBG | skipping /home - not owner
	I0904 05:53:46.280960 1120739 main.go:141] libmachine: (addons-691233) define libvirt domain using xml: 
	I0904 05:53:46.280981 1120739 main.go:141] libmachine: (addons-691233) <domain type='kvm'>
	I0904 05:53:46.280989 1120739 main.go:141] libmachine: (addons-691233)   <name>addons-691233</name>
	I0904 05:53:46.280997 1120739 main.go:141] libmachine: (addons-691233)   <memory unit='MiB'>4096</memory>
	I0904 05:53:46.281005 1120739 main.go:141] libmachine: (addons-691233)   <vcpu>2</vcpu>
	I0904 05:53:46.281020 1120739 main.go:141] libmachine: (addons-691233)   <features>
	I0904 05:53:46.281030 1120739 main.go:141] libmachine: (addons-691233)     <acpi/>
	I0904 05:53:46.281036 1120739 main.go:141] libmachine: (addons-691233)     <apic/>
	I0904 05:53:46.281043 1120739 main.go:141] libmachine: (addons-691233)     <pae/>
	I0904 05:53:46.281049 1120739 main.go:141] libmachine: (addons-691233)     
	I0904 05:53:46.281074 1120739 main.go:141] libmachine: (addons-691233)   </features>
	I0904 05:53:46.281085 1120739 main.go:141] libmachine: (addons-691233)   <cpu mode='host-passthrough'>
	I0904 05:53:46.281094 1120739 main.go:141] libmachine: (addons-691233)   
	I0904 05:53:46.281102 1120739 main.go:141] libmachine: (addons-691233)   </cpu>
	I0904 05:53:46.281110 1120739 main.go:141] libmachine: (addons-691233)   <os>
	I0904 05:53:46.281114 1120739 main.go:141] libmachine: (addons-691233)     <type>hvm</type>
	I0904 05:53:46.281125 1120739 main.go:141] libmachine: (addons-691233)     <boot dev='cdrom'/>
	I0904 05:53:46.281132 1120739 main.go:141] libmachine: (addons-691233)     <boot dev='hd'/>
	I0904 05:53:46.281142 1120739 main.go:141] libmachine: (addons-691233)     <bootmenu enable='no'/>
	I0904 05:53:46.281148 1120739 main.go:141] libmachine: (addons-691233)   </os>
	I0904 05:53:46.281156 1120739 main.go:141] libmachine: (addons-691233)   <devices>
	I0904 05:53:46.281164 1120739 main.go:141] libmachine: (addons-691233)     <disk type='file' device='cdrom'>
	I0904 05:53:46.281189 1120739 main.go:141] libmachine: (addons-691233)       <source file='/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/boot2docker.iso'/>
	I0904 05:53:46.281202 1120739 main.go:141] libmachine: (addons-691233)       <target dev='hdc' bus='scsi'/>
	I0904 05:53:46.281208 1120739 main.go:141] libmachine: (addons-691233)       <readonly/>
	I0904 05:53:46.281222 1120739 main.go:141] libmachine: (addons-691233)     </disk>
	I0904 05:53:46.281236 1120739 main.go:141] libmachine: (addons-691233)     <disk type='file' device='disk'>
	I0904 05:53:46.281247 1120739 main.go:141] libmachine: (addons-691233)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0904 05:53:46.281260 1120739 main.go:141] libmachine: (addons-691233)       <source file='/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/addons-691233.rawdisk'/>
	I0904 05:53:46.281269 1120739 main.go:141] libmachine: (addons-691233)       <target dev='hda' bus='virtio'/>
	I0904 05:53:46.281277 1120739 main.go:141] libmachine: (addons-691233)     </disk>
	I0904 05:53:46.281286 1120739 main.go:141] libmachine: (addons-691233)     <interface type='network'>
	I0904 05:53:46.281292 1120739 main.go:141] libmachine: (addons-691233)       <source network='mk-addons-691233'/>
	I0904 05:53:46.281301 1120739 main.go:141] libmachine: (addons-691233)       <model type='virtio'/>
	I0904 05:53:46.281310 1120739 main.go:141] libmachine: (addons-691233)     </interface>
	I0904 05:53:46.281321 1120739 main.go:141] libmachine: (addons-691233)     <interface type='network'>
	I0904 05:53:46.281329 1120739 main.go:141] libmachine: (addons-691233)       <source network='default'/>
	I0904 05:53:46.281338 1120739 main.go:141] libmachine: (addons-691233)       <model type='virtio'/>
	I0904 05:53:46.281346 1120739 main.go:141] libmachine: (addons-691233)     </interface>
	I0904 05:53:46.281355 1120739 main.go:141] libmachine: (addons-691233)     <serial type='pty'>
	I0904 05:53:46.281363 1120739 main.go:141] libmachine: (addons-691233)       <target port='0'/>
	I0904 05:53:46.281371 1120739 main.go:141] libmachine: (addons-691233)     </serial>
	I0904 05:53:46.281377 1120739 main.go:141] libmachine: (addons-691233)     <console type='pty'>
	I0904 05:53:46.281394 1120739 main.go:141] libmachine: (addons-691233)       <target type='serial' port='0'/>
	I0904 05:53:46.281414 1120739 main.go:141] libmachine: (addons-691233)     </console>
	I0904 05:53:46.281421 1120739 main.go:141] libmachine: (addons-691233)     <rng model='virtio'>
	I0904 05:53:46.281438 1120739 main.go:141] libmachine: (addons-691233)       <backend model='random'>/dev/random</backend>
	I0904 05:53:46.281449 1120739 main.go:141] libmachine: (addons-691233)     </rng>
	I0904 05:53:46.281457 1120739 main.go:141] libmachine: (addons-691233)     
	I0904 05:53:46.281462 1120739 main.go:141] libmachine: (addons-691233)     
	I0904 05:53:46.281470 1120739 main.go:141] libmachine: (addons-691233)   </devices>
	I0904 05:53:46.281479 1120739 main.go:141] libmachine: (addons-691233) </domain>
	I0904 05:53:46.281489 1120739 main.go:141] libmachine: (addons-691233) 
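	# Once defined, the domain can be read back from libvirt; a minimal
	# sketch (assumes virsh is available on the host):
	virsh --connect qemu:///system dumpxml addons-691233
	virsh --connect qemu:///system list --all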
	I0904 05:53:46.287068 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:7b:94:72 in network default
	I0904 05:53:46.287708 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:53:46.287729 1120739 main.go:141] libmachine: (addons-691233) starting domain...
	I0904 05:53:46.287742 1120739 main.go:141] libmachine: (addons-691233) ensuring networks are active...
	I0904 05:53:46.288392 1120739 main.go:141] libmachine: (addons-691233) Ensuring network default is active
	I0904 05:53:46.288674 1120739 main.go:141] libmachine: (addons-691233) Ensuring network mk-addons-691233 is active
	I0904 05:53:46.289190 1120739 main.go:141] libmachine: (addons-691233) getting domain XML...
	I0904 05:53:46.289953 1120739 main.go:141] libmachine: (addons-691233) creating domain...
	I0904 05:53:47.656305 1120739 main.go:141] libmachine: (addons-691233) waiting for IP...
	I0904 05:53:47.657274 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:53:47.657748 1120739 main.go:141] libmachine: (addons-691233) DBG | unable to find current IP address of domain addons-691233 in network mk-addons-691233
	I0904 05:53:47.657800 1120739 main.go:141] libmachine: (addons-691233) DBG | I0904 05:53:47.657734 1120761 retry.go:31] will retry after 265.457483ms: waiting for domain to come up
	I0904 05:53:47.925143 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:53:47.925525 1120739 main.go:141] libmachine: (addons-691233) DBG | unable to find current IP address of domain addons-691233 in network mk-addons-691233
	I0904 05:53:47.925564 1120739 main.go:141] libmachine: (addons-691233) DBG | I0904 05:53:47.925505 1120761 retry.go:31] will retry after 297.901512ms: waiting for domain to come up
	I0904 05:53:48.225030 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:53:48.225537 1120739 main.go:141] libmachine: (addons-691233) DBG | unable to find current IP address of domain addons-691233 in network mk-addons-691233
	I0904 05:53:48.225562 1120739 main.go:141] libmachine: (addons-691233) DBG | I0904 05:53:48.225479 1120761 retry.go:31] will retry after 481.13899ms: waiting for domain to come up
	I0904 05:53:48.708005 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:53:48.708444 1120739 main.go:141] libmachine: (addons-691233) DBG | unable to find current IP address of domain addons-691233 in network mk-addons-691233
	I0904 05:53:48.708512 1120739 main.go:141] libmachine: (addons-691233) DBG | I0904 05:53:48.708428 1120761 retry.go:31] will retry after 451.694242ms: waiting for domain to come up
	I0904 05:53:49.161978 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:53:49.162489 1120739 main.go:141] libmachine: (addons-691233) DBG | unable to find current IP address of domain addons-691233 in network mk-addons-691233
	I0904 05:53:49.162518 1120739 main.go:141] libmachine: (addons-691233) DBG | I0904 05:53:49.162460 1120761 retry.go:31] will retry after 593.381146ms: waiting for domain to come up
	I0904 05:53:49.758025 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:53:49.758615 1120739 main.go:141] libmachine: (addons-691233) DBG | unable to find current IP address of domain addons-691233 in network mk-addons-691233
	I0904 05:53:49.758671 1120739 main.go:141] libmachine: (addons-691233) DBG | I0904 05:53:49.758550 1120761 retry.go:31] will retry after 809.038359ms: waiting for domain to come up
	I0904 05:53:50.569324 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:53:50.569728 1120739 main.go:141] libmachine: (addons-691233) DBG | unable to find current IP address of domain addons-691233 in network mk-addons-691233
	I0904 05:53:50.569759 1120739 main.go:141] libmachine: (addons-691233) DBG | I0904 05:53:50.569692 1120761 retry.go:31] will retry after 755.46305ms: waiting for domain to come up
	I0904 05:53:51.326701 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:53:51.327147 1120739 main.go:141] libmachine: (addons-691233) DBG | unable to find current IP address of domain addons-691233 in network mk-addons-691233
	I0904 05:53:51.327179 1120739 main.go:141] libmachine: (addons-691233) DBG | I0904 05:53:51.327080 1120761 retry.go:31] will retry after 1.174747431s: waiting for domain to come up
	I0904 05:53:52.503545 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:53:52.504021 1120739 main.go:141] libmachine: (addons-691233) DBG | unable to find current IP address of domain addons-691233 in network mk-addons-691233
	I0904 05:53:52.504045 1120739 main.go:141] libmachine: (addons-691233) DBG | I0904 05:53:52.503991 1120761 retry.go:31] will retry after 1.307900727s: waiting for domain to come up
	I0904 05:53:53.813574 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:53:53.814032 1120739 main.go:141] libmachine: (addons-691233) DBG | unable to find current IP address of domain addons-691233 in network mk-addons-691233
	I0904 05:53:53.814063 1120739 main.go:141] libmachine: (addons-691233) DBG | I0904 05:53:53.813989 1120761 retry.go:31] will retry after 1.599626414s: waiting for domain to come up
	I0904 05:53:55.415760 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:53:55.416288 1120739 main.go:141] libmachine: (addons-691233) DBG | unable to find current IP address of domain addons-691233 in network mk-addons-691233
	I0904 05:53:55.416344 1120739 main.go:141] libmachine: (addons-691233) DBG | I0904 05:53:55.416275 1120761 retry.go:31] will retry after 2.573809828s: waiting for domain to come up
	I0904 05:53:57.992951 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:53:57.993415 1120739 main.go:141] libmachine: (addons-691233) DBG | unable to find current IP address of domain addons-691233 in network mk-addons-691233
	I0904 05:53:57.993435 1120739 main.go:141] libmachine: (addons-691233) DBG | I0904 05:53:57.993379 1120761 retry.go:31] will retry after 3.125491596s: waiting for domain to come up
	I0904 05:54:01.121370 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:01.121870 1120739 main.go:141] libmachine: (addons-691233) DBG | unable to find current IP address of domain addons-691233 in network mk-addons-691233
	I0904 05:54:01.121896 1120739 main.go:141] libmachine: (addons-691233) DBG | I0904 05:54:01.121837 1120761 retry.go:31] will retry after 3.53097308s: waiting for domain to come up
	I0904 05:54:04.656523 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:04.656947 1120739 main.go:141] libmachine: (addons-691233) DBG | unable to find current IP address of domain addons-691233 in network mk-addons-691233
	I0904 05:54:04.656977 1120739 main.go:141] libmachine: (addons-691233) DBG | I0904 05:54:04.656907 1120761 retry.go:31] will retry after 4.005703956s: waiting for domain to come up
	I0904 05:54:08.667483 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:08.667887 1120739 main.go:141] libmachine: (addons-691233) found domain IP: 192.168.39.193
	I0904 05:54:08.667924 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has current primary IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:08.667940 1120739 main.go:141] libmachine: (addons-691233) reserving static IP address...
	I0904 05:54:08.668264 1120739 main.go:141] libmachine: (addons-691233) DBG | unable to find host DHCP lease matching {name: "addons-691233", mac: "52:54:00:45:5e:02", ip: "192.168.39.193"} in network mk-addons-691233
	I0904 05:54:08.741694 1120739 main.go:141] libmachine: (addons-691233) DBG | Getting to WaitForSSH function...
	I0904 05:54:08.741725 1120739 main.go:141] libmachine: (addons-691233) reserved static IP address 192.168.39.193 for domain addons-691233
	I0904 05:54:08.741738 1120739 main.go:141] libmachine: (addons-691233) waiting for SSH...
	I0904 05:54:08.744110 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:08.744539 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:minikube Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:08.744568 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:08.744720 1120739 main.go:141] libmachine: (addons-691233) DBG | Using SSH client type: external
	I0904 05:54:08.744748 1120739 main.go:141] libmachine: (addons-691233) DBG | Using SSH private key: /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/id_rsa (-rw-------)
	I0904 05:54:08.744782 1120739 main.go:141] libmachine: (addons-691233) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.193 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0904 05:54:08.744794 1120739 main.go:141] libmachine: (addons-691233) DBG | About to run SSH command:
	I0904 05:54:08.744806 1120739 main.go:141] libmachine: (addons-691233) DBG | exit 0
	I0904 05:54:08.870994 1120739 main.go:141] libmachine: (addons-691233) DBG | SSH cmd err, output: <nil>: 
	I0904 05:54:08.871275 1120739 main.go:141] libmachine: (addons-691233) KVM machine creation complete
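	# The wait-for-IP and wait-for-SSH steps above can be reproduced by hand;
	# a minimal sketch (assumes virsh; key path and docker@IP are taken
	# verbatim from the ssh command logged above):
	virsh --connect qemu:///system domifaddr addons-691233 --source lease
	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	  -i /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/id_rsa \
	  docker@192.168.39.193 'exit 0'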
	I0904 05:54:08.871578 1120739 main.go:141] libmachine: (addons-691233) Calling .GetConfigRaw
	I0904 05:54:08.872139 1120739 main.go:141] libmachine: (addons-691233) Calling .DriverName
	I0904 05:54:08.872326 1120739 main.go:141] libmachine: (addons-691233) Calling .DriverName
	I0904 05:54:08.872484 1120739 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0904 05:54:08.872500 1120739 main.go:141] libmachine: (addons-691233) Calling .GetState
	I0904 05:54:08.873631 1120739 main.go:141] libmachine: Detecting operating system of created instance...
	I0904 05:54:08.873649 1120739 main.go:141] libmachine: Waiting for SSH to be available...
	I0904 05:54:08.873656 1120739 main.go:141] libmachine: Getting to WaitForSSH function...
	I0904 05:54:08.873664 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:08.876009 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:08.876506 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:08.876536 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:08.876710 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:08.876899 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:08.877041 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:08.877163 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:08.877303 1120739 main.go:141] libmachine: Using SSH client type: native
	I0904 05:54:08.877603 1120739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0904 05:54:08.877618 1120739 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0904 05:54:08.986412 1120739 main.go:141] libmachine: SSH cmd err, output: <nil>: 
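A few lines earlier the log switches to the "native" client, i.e. the same `exit 0` executed in-process rather than via the ssh binary. A rough equivalent using golang.org/x/crypto/ssh (my assumption for the sketch; minikube's actual native client is internal to its machine library):

    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // same effect as StrictHostKeyChecking=no
        }
        client, err := ssh.Dial("tcp", "192.168.39.193:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        // "exit 0" is the cheapest possible proof that exec channels work.
        if err := sess.Run("exit 0"); err != nil {
            log.Fatal(err)
        }
        log.Println("ssh ready")
    }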
	I0904 05:54:08.986454 1120739 main.go:141] libmachine: Detecting the provisioner...
	I0904 05:54:08.986462 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:08.989313 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:08.989679 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:08.989716 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:08.989898 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:08.990161 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:08.990358 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:08.990488 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:08.990667 1120739 main.go:141] libmachine: Using SSH client type: native
	I0904 05:54:08.990908 1120739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0904 05:54:08.990926 1120739 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0904 05:54:09.104273 1120739 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0904 05:54:09.104386 1120739 main.go:141] libmachine: found compatible host: buildroot
	I0904 05:54:09.104400 1120739 main.go:141] libmachine: Provisioning with buildroot...
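Provisioner detection is just `cat /etc/os-release` plus key/value parsing; `ID=buildroot` is what selects the buildroot provisioner. A small sketch over the exact output captured above:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func main() {
        // Verbatim output captured in the log above.
        out := "NAME=Buildroot\nVERSION=2025.02-dirty\nID=buildroot\nVERSION_ID=2025.02\nPRETTY_NAME=\"Buildroot 2025.02\"\n"
        fields := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(out))
        for sc.Scan() {
            if k, v, ok := strings.Cut(sc.Text(), "="); ok {
                fields[k] = strings.Trim(v, `"`)
            }
        }
        if fields["ID"] == "buildroot" {
            fmt.Println("found compatible host:", fields["ID"]) // -> buildroot
        }
    }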
	I0904 05:54:09.104412 1120739 main.go:141] libmachine: (addons-691233) Calling .GetMachineName
	I0904 05:54:09.104690 1120739 buildroot.go:166] provisioning hostname "addons-691233"
	I0904 05:54:09.104718 1120739 main.go:141] libmachine: (addons-691233) Calling .GetMachineName
	I0904 05:54:09.104944 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:09.107851 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:09.108228 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:09.108261 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:09.108380 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:09.108582 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:09.108737 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:09.108885 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:09.109058 1120739 main.go:141] libmachine: Using SSH client type: native
	I0904 05:54:09.109285 1120739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0904 05:54:09.109297 1120739 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-691233 && echo "addons-691233" | sudo tee /etc/hostname
	I0904 05:54:09.230600 1120739 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-691233
	
	I0904 05:54:09.230644 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:09.233425 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:09.233777 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:09.233808 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:09.234001 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:09.234183 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:09.234324 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:09.234434 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:09.234561 1120739 main.go:141] libmachine: Using SSH client type: native
	I0904 05:54:09.234767 1120739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0904 05:54:09.234787 1120739 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-691233' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-691233/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-691233' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 05:54:09.352203 1120739 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 05:54:09.352237 1120739 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-1115845/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-1115845/.minikube}
	I0904 05:54:09.352281 1120739 buildroot.go:174] setting up certificates
	I0904 05:54:09.352295 1120739 provision.go:84] configureAuth start
	I0904 05:54:09.352307 1120739 main.go:141] libmachine: (addons-691233) Calling .GetMachineName
	I0904 05:54:09.352674 1120739 main.go:141] libmachine: (addons-691233) Calling .GetIP
	I0904 05:54:09.355396 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:09.355781 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:09.355802 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:09.355959 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:09.357853 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:09.358154 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:09.358197 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:09.358253 1120739 provision.go:143] copyHostCerts
	I0904 05:54:09.358359 1120739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-1115845/.minikube/key.pem (1679 bytes)
	I0904 05:54:09.358535 1120739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.pem (1082 bytes)
	I0904 05:54:09.358652 1120739 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-1115845/.minikube/cert.pem (1123 bytes)
	I0904 05:54:09.358787 1120739 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca-key.pem org=jenkins.addons-691233 san=[127.0.0.1 192.168.39.193 addons-691233 localhost minikube]
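The server cert generated here must carry every name a client might dial, hence the SAN list `[127.0.0.1 192.168.39.193 addons-691233 localhost minikube]`. A compact Go sketch of minting such a SAN-bearing cert; it self-signs to stay short, whereas the real flow signs with ca-key.pem as the log shows:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-691233"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs from the log line above.
            DNSNames:    []string{"addons-691233", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.193")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            log.Fatal(err)
        }
    }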
	I0904 05:54:09.523563 1120739 provision.go:177] copyRemoteCerts
	I0904 05:54:09.523629 1120739 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 05:54:09.523655 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:09.526250 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:09.526567 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:09.526598 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:09.526806 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:09.527008 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:09.527183 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:09.527313 1120739 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/id_rsa Username:docker}
	I0904 05:54:09.614319 1120739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0904 05:54:09.640508 1120739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0904 05:54:09.667672 1120739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 05:54:09.693570 1120739 provision.go:87] duration metric: took 341.259906ms to configureAuth
	I0904 05:54:09.693603 1120739 buildroot.go:189] setting minikube options for container-runtime
	I0904 05:54:09.693842 1120739 config.go:182] Loaded profile config "addons-691233": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 05:54:09.693965 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:09.696510 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:09.696912 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:09.696954 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:09.697082 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:09.697314 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:09.697465 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:09.697614 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:09.697736 1120739 main.go:141] libmachine: Using SSH client type: native
	I0904 05:54:09.697951 1120739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0904 05:54:09.697964 1120739 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 05:54:09.916276 1120739 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 05:54:09.916308 1120739 main.go:141] libmachine: Checking connection to Docker...
	I0904 05:54:09.916321 1120739 main.go:141] libmachine: (addons-691233) Calling .GetURL
	I0904 05:54:09.917577 1120739 main.go:141] libmachine: (addons-691233) DBG | using libvirt version 6000000
	I0904 05:54:09.919874 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:09.920181 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:09.920207 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:09.920330 1120739 main.go:141] libmachine: Docker is up and running!
	I0904 05:54:09.920342 1120739 main.go:141] libmachine: Reticulating splines...
	I0904 05:54:09.920351 1120739 client.go:171] duration metric: took 25.245355406s to LocalClient.Create
	I0904 05:54:09.920380 1120739 start.go:167] duration metric: took 25.245431715s to libmachine.API.Create "addons-691233"
	I0904 05:54:09.920394 1120739 start.go:293] postStartSetup for "addons-691233" (driver="kvm2")
	I0904 05:54:09.920409 1120739 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 05:54:09.920435 1120739 main.go:141] libmachine: (addons-691233) Calling .DriverName
	I0904 05:54:09.920657 1120739 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 05:54:09.920692 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:09.922508 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:09.922811 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:09.922853 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:09.922958 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:09.923116 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:09.923228 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:09.923387 1120739 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/id_rsa Username:docker}
	I0904 05:54:10.006704 1120739 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 05:54:10.011176 1120739 info.go:137] Remote host: Buildroot 2025.02
	I0904 05:54:10.011214 1120739 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1115845/.minikube/addons for local assets ...
	I0904 05:54:10.011288 1120739 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1115845/.minikube/files for local assets ...
	I0904 05:54:10.011314 1120739 start.go:296] duration metric: took 90.913112ms for postStartSetup
	I0904 05:54:10.011367 1120739 main.go:141] libmachine: (addons-691233) Calling .GetConfigRaw
	I0904 05:54:10.011962 1120739 main.go:141] libmachine: (addons-691233) Calling .GetIP
	I0904 05:54:10.014521 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:10.014846 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:10.014879 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:10.015086 1120739 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/config.json ...
	I0904 05:54:10.015265 1120739 start.go:128] duration metric: took 25.358268839s to createHost
	I0904 05:54:10.015293 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:10.017975 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:10.018801 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:10.018843 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:10.018976 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:10.019164 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:10.019296 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:10.019407 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:10.019538 1120739 main.go:141] libmachine: Using SSH client type: native
	I0904 05:54:10.019811 1120739 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0904 05:54:10.019824 1120739 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0904 05:54:10.132139 1120739 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756965250.109178520
	
	I0904 05:54:10.132166 1120739 fix.go:216] guest clock: 1756965250.109178520
	I0904 05:54:10.132174 1120739 fix.go:229] Guest: 2025-09-04 05:54:10.10917852 +0000 UTC Remote: 2025-09-04 05:54:10.015278478 +0000 UTC m=+25.461829307 (delta=93.900042ms)
	I0904 05:54:10.132216 1120739 fix.go:200] guest clock delta is within tolerance: 93.900042ms
	I0904 05:54:10.132221 1120739 start.go:83] releasing machines lock for "addons-691233", held for 25.475315187s
	I0904 05:54:10.132249 1120739 main.go:141] libmachine: (addons-691233) Calling .DriverName
	I0904 05:54:10.132526 1120739 main.go:141] libmachine: (addons-691233) Calling .GetIP
	I0904 05:54:10.135207 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:10.135568 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:10.135606 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:10.135721 1120739 main.go:141] libmachine: (addons-691233) Calling .DriverName
	I0904 05:54:10.136198 1120739 main.go:141] libmachine: (addons-691233) Calling .DriverName
	I0904 05:54:10.136387 1120739 main.go:141] libmachine: (addons-691233) Calling .DriverName
	I0904 05:54:10.136506 1120739 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 05:54:10.136565 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:10.136600 1120739 ssh_runner.go:195] Run: cat /version.json
	I0904 05:54:10.136630 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:10.139178 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:10.139443 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:10.139477 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:10.139503 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:10.139629 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:10.139801 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:10.139878 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:10.139902 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:10.139976 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:10.140044 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:10.140224 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:10.140263 1120739 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/id_rsa Username:docker}
	I0904 05:54:10.140376 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:10.140541 1120739 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/id_rsa Username:docker}
	I0904 05:54:10.251868 1120739 ssh_runner.go:195] Run: systemctl --version
	I0904 05:54:10.257527 1120739 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 05:54:10.409661 1120739 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0904 05:54:10.416617 1120739 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0904 05:54:10.416681 1120739 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 05:54:10.434512 1120739 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
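Renaming the stock bridge/podman configs to *.mk_disabled is deliberate: CRI-O loads whichever config in /etc/cni/net.d sorts first, so a leftover 87-podman-bridge.conflist could shadow the bridge config minikube generates for the cluster later in this run.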
	I0904 05:54:10.434542 1120739 start.go:495] detecting cgroup driver to use...
	I0904 05:54:10.434624 1120739 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 05:54:10.453645 1120739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 05:54:10.469908 1120739 docker.go:218] disabling cri-docker service (if available) ...
	I0904 05:54:10.469969 1120739 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 05:54:10.485541 1120739 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 05:54:10.501470 1120739 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 05:54:10.640151 1120739 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 05:54:10.776202 1120739 docker.go:234] disabling docker service ...
	I0904 05:54:10.776348 1120739 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 05:54:10.791402 1120739 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 05:54:10.804918 1120739 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 05:54:11.010667 1120739 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 05:54:11.147919 1120739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 05:54:11.163511 1120739 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 05:54:11.183634 1120739 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0904 05:54:11.183696 1120739 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 05:54:11.194673 1120739 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 05:54:11.194737 1120739 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 05:54:11.205715 1120739 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 05:54:11.216505 1120739 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 05:54:11.227121 1120739 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 05:54:11.238293 1120739 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 05:54:11.248846 1120739 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 05:54:11.266851 1120739 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
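Taken together, those four sed edits plausibly leave /etc/crio/crio.conf.d/02-crio.conf with lines like the following (reconstructed from the commands, not captured from the VM; the section headers are assumed from the stock CRI-O config layout):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]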
	I0904 05:54:11.277633 1120739 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 05:54:11.286743 1120739 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0904 05:54:11.286800 1120739 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0904 05:54:11.305426 1120739 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
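The failed sysctl probe above is expected rather than fatal: net.bridge.bridge-nf-call-iptables only exists once the br_netfilter module is loaded, which is exactly what the fallback modprobe does. Together with ip_forward=1 this makes bridged pod traffic traverse iptables, which kube-proxy's rules depend on.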
	I0904 05:54:11.315474 1120739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 05:54:11.448171 1120739 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0904 05:54:11.550188 1120739 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 05:54:11.550292 1120739 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 05:54:11.555131 1120739 start.go:563] Will wait 60s for crictl version
	I0904 05:54:11.555198 1120739 ssh_runner.go:195] Run: which crictl
	I0904 05:54:11.559078 1120739 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 05:54:11.597592 1120739 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0904 05:54:11.597721 1120739 ssh_runner.go:195] Run: crio --version
	I0904 05:54:11.624470 1120739 ssh_runner.go:195] Run: crio --version
	I0904 05:54:11.653419 1120739 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0904 05:54:11.654547 1120739 main.go:141] libmachine: (addons-691233) Calling .GetIP
	I0904 05:54:11.657086 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:11.657535 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:11.657585 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:11.657782 1120739 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0904 05:54:11.661923 1120739 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
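The `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp` idiom keeps the /etc/hosts update idempotent: any stale host.minikube.internal entry is stripped, exactly one fresh line is appended, and the file is swapped in with a single privileged copy rather than an in-place edit.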
	I0904 05:54:11.675672 1120739 kubeadm.go:875] updating cluster {Name:addons-691233 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-691233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 05:54:11.675793 1120739 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 05:54:11.675850 1120739 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 05:54:11.707925 1120739 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0904 05:54:11.708019 1120739 ssh_runner.go:195] Run: which lz4
	I0904 05:54:11.712022 1120739 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0904 05:54:11.716460 1120739 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0904 05:54:11.716489 1120739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0904 05:54:13.002384 1120739 crio.go:462] duration metric: took 1.290391512s to copy over tarball
	I0904 05:54:13.002471 1120739 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0904 05:54:14.574528 1120739 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.572020653s)
	I0904 05:54:14.574570 1120739 crio.go:469] duration metric: took 1.572152137s to extract the tarball
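Two details of the extract command are worth noting: -I lz4 streams decompression of the ~409 MB preload during the untar, and --xattrs --xattrs-include security.capability preserves file-capability attributes on the preloaded files, which a plain tar extraction would drop.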
	I0904 05:54:14.574581 1120739 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0904 05:54:14.615099 1120739 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 05:54:14.659531 1120739 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 05:54:14.659554 1120739 cache_images.go:85] Images are preloaded, skipping loading
	I0904 05:54:14.659561 1120739 kubeadm.go:926] updating node { 192.168.39.193 8443 v1.34.0 crio true true} ...
	I0904 05:54:14.659685 1120739 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-691233 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-691233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 05:54:14.659765 1120739 ssh_runner.go:195] Run: crio config
	I0904 05:54:14.703134 1120739 cni.go:84] Creating CNI manager for ""
	I0904 05:54:14.703162 1120739 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 05:54:14.703176 1120739 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 05:54:14.703205 1120739 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.193 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-691233 NodeName:addons-691233 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 05:54:14.703367 1120739 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.193
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-691233"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.193"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.193"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
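	One way to sanity-check a generated config like the one above without mutating the node is kubeadm's dry-run mode, e.g.:

	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run

	which validates the stacked documents and prints what would be written instead of writing it.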
	
	I0904 05:54:14.703433 1120739 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 05:54:14.714657 1120739 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 05:54:14.714725 1120739 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 05:54:14.725962 1120739 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0904 05:54:14.745042 1120739 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 05:54:14.763906 1120739 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I0904 05:54:14.782717 1120739 ssh_runner.go:195] Run: grep 192.168.39.193	control-plane.minikube.internal$ /etc/hosts
	I0904 05:54:14.786457 1120739 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.193	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 05:54:14.799541 1120739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 05:54:14.934142 1120739 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 05:54:14.953188 1120739 certs.go:68] Setting up /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233 for IP: 192.168.39.193
	I0904 05:54:14.953234 1120739 certs.go:194] generating shared ca certs ...
	I0904 05:54:14.953259 1120739 certs.go:226] acquiring lock for ca certs: {Name:mkb48abb711128619cd278e65e40c326a6b20d89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 05:54:14.953459 1120739 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.key
	I0904 05:54:15.049515 1120739 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.crt ...
	I0904 05:54:15.049554 1120739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.crt: {Name:mk2be74acc6707c86be9a95de72abb5d1a40e4af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 05:54:15.049725 1120739 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.key ...
	I0904 05:54:15.049743 1120739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.key: {Name:mk7dc1e69dc596d07f990860ac98c1d207b1e58f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 05:54:15.049822 1120739 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/proxy-client-ca.key
	I0904 05:54:15.203475 1120739 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1115845/.minikube/proxy-client-ca.crt ...
	I0904 05:54:15.203507 1120739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/.minikube/proxy-client-ca.crt: {Name:mk8498426530c9163b0ccd6ae8db933ef665c87d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 05:54:15.203666 1120739 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1115845/.minikube/proxy-client-ca.key ...
	I0904 05:54:15.203677 1120739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/.minikube/proxy-client-ca.key: {Name:mke2f076b2c622be1c1ba54bc88bccd578a9d690 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 05:54:15.203744 1120739 certs.go:256] generating profile certs ...
	I0904 05:54:15.203802 1120739 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.key
	I0904 05:54:15.203816 1120739 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt with IP's: []
	I0904 05:54:15.287422 1120739 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt ...
	I0904 05:54:15.287457 1120739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: {Name:mk218da20bba71b7702cce94ce867fa09d3fc325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 05:54:15.287635 1120739 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.key ...
	I0904 05:54:15.287652 1120739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.key: {Name:mk9c675b0bc48f627804ef360f76c6ee0909c367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 05:54:15.287719 1120739 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/apiserver.key.fdb5bebd
	I0904 05:54:15.287738 1120739 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/apiserver.crt.fdb5bebd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.193]
	I0904 05:54:15.329662 1120739 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/apiserver.crt.fdb5bebd ...
	I0904 05:54:15.329691 1120739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/apiserver.crt.fdb5bebd: {Name:mk1dac973f7e2d2165e99163b9bc4d828dc9a763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 05:54:15.329834 1120739 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/apiserver.key.fdb5bebd ...
	I0904 05:54:15.329846 1120739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/apiserver.key.fdb5bebd: {Name:mkf3aaf7c1df4769af9bfe28530719d85822de35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 05:54:15.329911 1120739 certs.go:381] copying /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/apiserver.crt.fdb5bebd -> /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/apiserver.crt
	I0904 05:54:15.330029 1120739 certs.go:385] copying /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/apiserver.key.fdb5bebd -> /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/apiserver.key
	I0904 05:54:15.330078 1120739 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/proxy-client.key
	I0904 05:54:15.330098 1120739 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/proxy-client.crt with IP's: []
	I0904 05:54:15.653844 1120739 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/proxy-client.crt ...
	I0904 05:54:15.653883 1120739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/proxy-client.crt: {Name:mk90916d5aeff8ba415aae7510985b4e947d07a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 05:54:15.654088 1120739 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/proxy-client.key ...
	I0904 05:54:15.654109 1120739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/proxy-client.key: {Name:mkdccdd2879b660d5ddaf360a9941c15fba1a522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 05:54:15.654317 1120739 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 05:54:15.654367 1120739 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem (1082 bytes)
	I0904 05:54:15.654406 1120739 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/cert.pem (1123 bytes)
	I0904 05:54:15.654439 1120739 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/key.pem (1679 bytes)
	I0904 05:54:15.655244 1120739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 05:54:15.691996 1120739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0904 05:54:15.728539 1120739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 05:54:15.755153 1120739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0904 05:54:15.781104 1120739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0904 05:54:15.806715 1120739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 05:54:15.832762 1120739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 05:54:15.858570 1120739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0904 05:54:15.883539 1120739 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 05:54:15.909975 1120739 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 05:54:15.928231 1120739 ssh_runner.go:195] Run: openssl version
	I0904 05:54:15.934194 1120739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 05:54:15.945759 1120739 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 05:54:15.950340 1120739 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 05:54 /usr/share/ca-certificates/minikubeCA.pem
	I0904 05:54:15.950401 1120739 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 05:54:15.957129 1120739 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
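The b5213941.0 symlink name is OpenSSL's subject-hash convention: tools resolve CAs in /etc/ssl/certs by `<hash>.0`, where the hash comes from `openssl x509 -hash`. Reproduced by hand with the paths from the log:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0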
	I0904 05:54:15.969243 1120739 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 05:54:15.973489 1120739 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0904 05:54:15.973557 1120739 kubeadm.go:392] StartCluster: {Name:addons-691233 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-691233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 05:54:15.973630 1120739 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 05:54:15.973712 1120739 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 05:54:16.010770 1120739 cri.go:89] found id: ""
	I0904 05:54:16.010857 1120739 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 05:54:16.022895 1120739 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 05:54:16.034168 1120739 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 05:54:16.045295 1120739 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 05:54:16.045312 1120739 kubeadm.go:157] found existing configuration files:
	
	I0904 05:54:16.045349 1120739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0904 05:54:16.055852 1120739 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 05:54:16.055896 1120739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 05:54:16.066097 1120739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0904 05:54:16.075573 1120739 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 05:54:16.075615 1120739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 05:54:16.086473 1120739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0904 05:54:16.096351 1120739 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 05:54:16.096403 1120739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 05:54:16.106434 1120739 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0904 05:54:16.115731 1120739 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 05:54:16.115795 1120739 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
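The four grep/rm rounds above are minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so kubeadm can regenerate it. A minimal shell sketch of the same check (the loop form is illustrative, not minikube's actual code; exit status 2 above simply means the file did not exist yet):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Missing or stale files are treated the same: delete so kubeadm rewrites them.
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done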
	I0904 05:54:16.126610 1120739 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
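The --ignore-preflight-errors list suppresses checks that are expected to trip inside a pre-provisioned minikube VM (pre-existing manifest and etcd directories, swap, CPU/memory minimums, port 10250). kubeadm supports running phases in isolation, so the same preflight checks can be replayed on their own against the generated config:

    # Re-run only the preflight phase with the config and exclusions from the log above.
    sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=Port-10250,Swap,NumCPU,Mem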
	I0904 05:54:16.176021 1120739 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0904 05:54:16.176127 1120739 kubeadm.go:310] [preflight] Running pre-flight checks
	I0904 05:54:16.270364 1120739 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0904 05:54:16.270534 1120739 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0904 05:54:16.270674 1120739 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0904 05:54:16.280061 1120739 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0904 05:54:16.307831 1120739 out.go:252]   - Generating certificates and keys ...
	I0904 05:54:16.307972 1120739 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0904 05:54:16.308060 1120739 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0904 05:54:16.370200 1120739 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0904 05:54:16.582950 1120739 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0904 05:54:16.775073 1120739 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0904 05:54:17.211197 1120739 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0904 05:54:17.252484 1120739 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0904 05:54:17.252651 1120739 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-691233 localhost] and IPs [192.168.39.193 127.0.0.1 ::1]
	I0904 05:54:17.838071 1120739 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0904 05:54:17.838246 1120739 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-691233 localhost] and IPs [192.168.39.193 127.0.0.1 ::1]
	I0904 05:54:18.531447 1120739 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0904 05:54:18.845725 1120739 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0904 05:54:18.969292 1120739 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0904 05:54:18.969407 1120739 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0904 05:54:19.154574 1120739 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0904 05:54:19.628299 1120739 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0904 05:54:19.862214 1120739 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0904 05:54:20.051246 1120739 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0904 05:54:20.263298 1120739 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0904 05:54:20.263812 1120739 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0904 05:54:20.265966 1120739 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0904 05:54:20.267705 1120739 out.go:252]   - Booting up control plane ...
	I0904 05:54:20.267792 1120739 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0904 05:54:20.267884 1120739 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0904 05:54:20.267984 1120739 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
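At this point all four static Pod manifests exist in the manifest folder named above, and the kubelet launches them directly. They can be verified on the node (filenames follow the standard kubeadm layout):

    sudo ls /etc/kubernetes/manifests
    # etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml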
	I0904 05:54:20.283843 1120739 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0904 05:54:20.283951 1120739 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0904 05:54:20.290571 1120739 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0904 05:54:20.290699 1120739 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0904 05:54:20.290749 1120739 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0904 05:54:20.454421 1120739 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0904 05:54:20.454570 1120739 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0904 05:54:20.955585 1120739 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.470832ms
	I0904 05:54:20.961112 1120739 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0904 05:54:20.961205 1120739 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.193:8443/livez
	I0904 05:54:20.961303 1120739 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0904 05:54:20.961391 1120739 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0904 05:54:24.146581 1120739 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 3.187017817s
	I0904 05:54:24.574989 1120739 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.615727396s
	I0904 05:54:26.460476 1120739 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.502089256s
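The three control-plane checks poll each component's own health endpoint at the addresses shown above. The same probes can be issued manually from the node as a quick sketch (-k skips certificate verification; the apiserver endpoint may additionally require credentials depending on its anonymous-auth setting):

    curl -sk https://192.168.39.193:8443/livez     # kube-apiserver
    curl -sk https://127.0.0.1:10257/healthz       # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez         # kube-scheduler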
	I0904 05:54:26.472762 1120739 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0904 05:54:26.487019 1120739 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0904 05:54:26.499234 1120739 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0904 05:54:26.499512 1120739 kubeadm.go:310] [mark-control-plane] Marking the node addons-691233 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0904 05:54:26.509532 1120739 kubeadm.go:310] [bootstrap-token] Using token: 5q1xvk.xc6t499hynqsn5tu
	I0904 05:54:26.510683 1120739 out.go:252]   - Configuring RBAC rules ...
	I0904 05:54:26.510788 1120739 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0904 05:54:26.521697 1120739 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0904 05:54:26.531055 1120739 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0904 05:54:26.537984 1120739 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0904 05:54:26.541203 1120739 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0904 05:54:26.544483 1120739 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
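The bootstrap token minted above (5q1xvk.xc6t499hynqsn5tu) backs the join commands printed further down in this log. Tokens created this way can be inspected and regenerated later with standard kubeadm subcommands:

    sudo kubeadm token list
    sudo kubeadm token create --print-join-command   # mints a fresh token and prints the join line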
	I0904 05:54:26.871062 1120739 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0904 05:54:27.303637 1120739 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0904 05:54:27.867078 1120739 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0904 05:54:27.868013 1120739 kubeadm.go:310] 
	I0904 05:54:27.868118 1120739 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0904 05:54:27.868130 1120739 kubeadm.go:310] 
	I0904 05:54:27.868221 1120739 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0904 05:54:27.868239 1120739 kubeadm.go:310] 
	I0904 05:54:27.868279 1120739 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0904 05:54:27.868369 1120739 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0904 05:54:27.868471 1120739 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0904 05:54:27.868481 1120739 kubeadm.go:310] 
	I0904 05:54:27.868555 1120739 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0904 05:54:27.868565 1120739 kubeadm.go:310] 
	I0904 05:54:27.868636 1120739 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0904 05:54:27.868644 1120739 kubeadm.go:310] 
	I0904 05:54:27.868715 1120739 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0904 05:54:27.868815 1120739 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0904 05:54:27.868907 1120739 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0904 05:54:27.868931 1120739 kubeadm.go:310] 
	I0904 05:54:27.869031 1120739 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0904 05:54:27.869133 1120739 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0904 05:54:27.869143 1120739 kubeadm.go:310] 
	I0904 05:54:27.869218 1120739 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5q1xvk.xc6t499hynqsn5tu \
	I0904 05:54:27.869319 1120739 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2651308ab51fc83fc020f40c2b31f227a6667a51808f73ed273560ac054e9c36 \
	I0904 05:54:27.869350 1120739 kubeadm.go:310] 	--control-plane 
	I0904 05:54:27.869367 1120739 kubeadm.go:310] 
	I0904 05:54:27.869482 1120739 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0904 05:54:27.869491 1120739 kubeadm.go:310] 
	I0904 05:54:27.869575 1120739 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5q1xvk.xc6t499hynqsn5tu \
	I0904 05:54:27.869699 1120739 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2651308ab51fc83fc020f40c2b31f227a6667a51808f73ed273560ac054e9c36 
	I0904 05:54:27.871769 1120739 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
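The single preflight warning is benign for this run, since minikube manages the kubelet itself; the remedy, if wanted, is exactly the command kubeadm names:

    sudo systemctl enable kubelet.service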
	I0904 05:54:27.871802 1120739 cni.go:84] Creating CNI manager for ""
	I0904 05:54:27.871817 1120739 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 05:54:27.873436 1120739 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0904 05:54:27.874418 1120739 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0904 05:54:27.887985 1120739 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
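The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not shown in the log. As a hedged sketch only, a bridge conflist of this kind typically looks like the following; the field values, in particular the pod subnet, are assumptions rather than the file's actual contents:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF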
	I0904 05:54:27.909264 1120739 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 05:54:27.909346 1120739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 05:54:27.909406 1120739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-691233 minikube.k8s.io/updated_at=2025_09_04T05_54_27_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=c3fa37de45a2901b215fab008201edf72ce5a1ff minikube.k8s.io/name=addons-691233 minikube.k8s.io/primary=true
	I0904 05:54:27.949219 1120739 ops.go:34] apiserver oom_adj: -16
	I0904 05:54:28.045434 1120739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 05:54:28.546156 1120739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 05:54:29.046500 1120739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 05:54:29.545649 1120739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 05:54:30.046017 1120739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 05:54:30.545632 1120739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 05:54:31.045609 1120739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 05:54:31.546170 1120739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 05:54:32.045950 1120739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 05:54:32.545733 1120739 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 05:54:32.621144 1120739 kubeadm.go:1105] duration metric: took 4.711863254s to wait for elevateKubeSystemPrivileges
	I0904 05:54:32.621192 1120739 kubeadm.go:394] duration metric: took 16.647639146s to StartCluster
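The ten identical "get sa default" runs above, spaced roughly 500ms apart, appear to be the elevateKubeSystemPrivileges wait: poll until the default ServiceAccount exists so the minikube-rbac cluster-admin binding issued at 05:54:27.909 can take effect. A shell sketch of the same wait (the until-loop is illustrative):

    until sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # matches the ~500ms cadence visible in the timestamps above
    done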
	I0904 05:54:32.621218 1120739 settings.go:142] acquiring lock: {Name:mkb015a02541f006ebfff677085f6c9619eaacb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 05:54:32.621364 1120739 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-1115845/kubeconfig
	I0904 05:54:32.621764 1120739 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/kubeconfig: {Name:mk586aba4eac8031d07aaf208d256e06f68e9260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 05:54:32.621999 1120739 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0904 05:54:32.622027 1120739 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.193 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 05:54:32.622091 1120739 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
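Each true entry in the toEnable map above corresponds to an addon that can also be toggled per profile from the CLI:

    minikube -p addons-691233 addons enable ingress
    minikube -p addons-691233 addons list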
	I0904 05:54:32.622199 1120739 addons.go:69] Setting yakd=true in profile "addons-691233"
	I0904 05:54:32.622230 1120739 addons.go:69] Setting inspektor-gadget=true in profile "addons-691233"
	I0904 05:54:32.622260 1120739 addons.go:69] Setting storage-provisioner=true in profile "addons-691233"
	I0904 05:54:32.622268 1120739 addons.go:238] Setting addon inspektor-gadget=true in "addons-691233"
	I0904 05:54:32.622272 1120739 addons.go:238] Setting addon storage-provisioner=true in "addons-691233"
	I0904 05:54:32.622279 1120739 addons.go:69] Setting registry-creds=true in profile "addons-691233"
	I0904 05:54:32.622309 1120739 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-691233"
	I0904 05:54:32.622311 1120739 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-691233"
	I0904 05:54:32.622311 1120739 config.go:182] Loaded profile config "addons-691233": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 05:54:32.622323 1120739 addons.go:238] Setting addon registry-creds=true in "addons-691233"
	I0904 05:54:32.622328 1120739 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-691233"
	I0904 05:54:32.622319 1120739 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-691233"
	I0904 05:54:32.622341 1120739 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-691233"
	I0904 05:54:32.622361 1120739 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-691233"
	I0904 05:54:32.622367 1120739 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-691233"
	I0904 05:54:32.622376 1120739 host.go:66] Checking if "addons-691233" exists ...
	I0904 05:54:32.622319 1120739 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-691233"
	I0904 05:54:32.622388 1120739 host.go:66] Checking if "addons-691233" exists ...
	I0904 05:54:32.622352 1120739 addons.go:69] Setting metrics-server=true in profile "addons-691233"
	I0904 05:54:32.622404 1120739 host.go:66] Checking if "addons-691233" exists ...
	I0904 05:54:32.622410 1120739 host.go:66] Checking if "addons-691233" exists ...
	I0904 05:54:32.622429 1120739 addons.go:238] Setting addon metrics-server=true in "addons-691233"
	I0904 05:54:32.622499 1120739 addons.go:69] Setting volcano=true in profile "addons-691233"
	I0904 05:54:32.622510 1120739 addons.go:238] Setting addon volcano=true in "addons-691233"
	I0904 05:54:32.622515 1120739 host.go:66] Checking if "addons-691233" exists ...
	I0904 05:54:32.622532 1120739 host.go:66] Checking if "addons-691233" exists ...
	I0904 05:54:32.622813 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.622827 1120739 addons.go:69] Setting volumesnapshots=true in profile "addons-691233"
	I0904 05:54:32.622859 1120739 addons.go:238] Setting addon volumesnapshots=true in "addons-691233"
	I0904 05:54:32.622867 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.622879 1120739 host.go:66] Checking if "addons-691233" exists ...
	I0904 05:54:32.622885 1120739 addons.go:69] Setting gcp-auth=true in profile "addons-691233"
	I0904 05:54:32.622894 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.622902 1120739 mustload.go:65] Loading cluster: addons-691233
	I0904 05:54:32.622900 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.622930 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.622934 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.622958 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.623057 1120739 config.go:182] Loaded profile config "addons-691233": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 05:54:32.623114 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.623173 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.623254 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.623295 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.623382 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.623404 1120739 addons.go:69] Setting registry=true in profile "addons-691233"
	I0904 05:54:32.623412 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.623423 1120739 addons.go:238] Setting addon registry=true in "addons-691233"
	I0904 05:54:32.622878 1120739 addons.go:69] Setting default-storageclass=true in profile "addons-691233"
	I0904 05:54:32.623466 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.622303 1120739 host.go:66] Checking if "addons-691233" exists ...
	I0904 05:54:32.623495 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.623472 1120739 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-691233"
	I0904 05:54:32.622323 1120739 addons.go:69] Setting cloud-spanner=true in profile "addons-691233"
	I0904 05:54:32.623849 1120739 addons.go:238] Setting addon cloud-spanner=true in "addons-691233"
	I0904 05:54:32.623873 1120739 host.go:66] Checking if "addons-691233" exists ...
	I0904 05:54:32.623905 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.623927 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.624098 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.624135 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.624216 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.624245 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.624603 1120739 out.go:179] * Verifying Kubernetes components...
	I0904 05:54:32.626073 1120739 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 05:54:32.622303 1120739 host.go:66] Checking if "addons-691233" exists ...
	I0904 05:54:32.626676 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.626707 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.623447 1120739 host.go:66] Checking if "addons-691233" exists ...
	I0904 05:54:32.622871 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.622252 1120739 addons.go:238] Setting addon yakd=true in "addons-691233"
	I0904 05:54:32.630717 1120739 host.go:66] Checking if "addons-691233" exists ...
	I0904 05:54:32.630825 1120739 addons.go:69] Setting ingress-dns=true in profile "addons-691233"
	I0904 05:54:32.630867 1120739 addons.go:238] Setting addon ingress-dns=true in "addons-691233"
	I0904 05:54:32.630905 1120739 host.go:66] Checking if "addons-691233" exists ...
	I0904 05:54:32.631161 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.631191 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.631293 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.631339 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.641340 1120739 addons.go:69] Setting ingress=true in profile "addons-691233"
	I0904 05:54:32.641376 1120739 addons.go:238] Setting addon ingress=true in "addons-691233"
	I0904 05:54:32.641435 1120739 host.go:66] Checking if "addons-691233" exists ...
	I0904 05:54:32.644604 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42599
	I0904 05:54:32.646496 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41531
	I0904 05:54:32.659015 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43757
	I0904 05:54:32.659021 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45633
	I0904 05:54:32.659122 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.659146 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.659374 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.659431 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.659685 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.659727 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.661520 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0904 05:54:32.661529 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36383
	I0904 05:54:32.661771 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39035
	I0904 05:54:32.662022 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.662151 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.662205 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.662254 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.662297 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.663049 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.663070 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.663165 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.663249 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.663404 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.663416 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.663546 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.663558 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.663618 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.663758 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.663771 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.663877 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.663888 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.663935 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.664346 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.664382 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.665397 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.665416 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.665484 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.665889 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.665922 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.667108 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.667133 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.667207 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.667259 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.667710 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.667756 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.668329 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.668358 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.668404 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.668734 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.668771 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.671677 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.671724 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.671793 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.672046 1120739 main.go:141] libmachine: (addons-691233) Calling .GetState
	I0904 05:54:32.674046 1120739 host.go:66] Checking if "addons-691233" exists ...
	I0904 05:54:32.683261 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36733
	I0904 05:54:32.683871 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.684744 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.684766 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.685190 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.685773 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.685826 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.697848 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46227
	I0904 05:54:32.697858 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36653
	I0904 05:54:32.697971 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46317
	I0904 05:54:32.698420 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.698594 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.698688 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.698866 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37165
	I0904 05:54:32.699141 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.699158 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.699454 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.699619 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.699633 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.699644 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.699619 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.699678 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.700006 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.700159 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.700171 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.700253 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.700283 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.700542 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.700582 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.700941 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.701004 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.701173 1120739 main.go:141] libmachine: (addons-691233) Calling .GetState
	I0904 05:54:32.701609 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.701669 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.703417 1120739 main.go:141] libmachine: (addons-691233) Calling .DriverName
	I0904 05:54:32.703704 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35463
	I0904 05:54:32.704189 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.704552 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.704574 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.704950 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.705326 1120739 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0904 05:54:32.705471 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.705511 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.706250 1120739 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0904 05:54:32.706269 1120739 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0904 05:54:32.706290 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:32.708976 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
	I0904 05:54:32.709392 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.709458 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.710073 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.710093 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.710178 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:32.710200 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.710385 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:32.710541 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:32.710690 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:32.710814 1120739 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/id_rsa Username:docker}
	I0904 05:54:32.710902 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.711097 1120739 main.go:141] libmachine: (addons-691233) Calling .GetState
	I0904 05:54:32.711726 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40171
	I0904 05:54:32.712262 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37001
	I0904 05:54:32.713007 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32983
	I0904 05:54:32.720682 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.720794 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.720800 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.720886 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.721048 1120739 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-691233"
	I0904 05:54:32.721098 1120739 host.go:66] Checking if "addons-691233" exists ...
	I0904 05:54:32.721419 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.721436 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.721459 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.721493 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.721563 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.721573 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.721676 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.721934 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.722155 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.722380 1120739 main.go:141] libmachine: (addons-691233) Calling .GetState
	I0904 05:54:32.722513 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.722551 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.722562 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.722586 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.723051 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.723694 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.723734 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.724783 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36107
	I0904 05:54:32.725522 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.727094 1120739 main.go:141] libmachine: (addons-691233) Calling .DriverName
	I0904 05:54:32.727347 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:32.727364 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:32.727770 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:32.727809 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:32.727829 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:32.727838 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:32.727844 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:32.728168 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:32.728194 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:32.728212 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	W0904 05:54:32.728300 1120739 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
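The volcano failure is expected rather than a bug in this run: the addon declares no crio support, so requesting it on this runtime (it is set true in the toEnable map above) can only produce this warning. It can be left out of the profile explicitly:

    minikube -p addons-691233 addons disable volcano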
	I0904 05:54:32.728922 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46339
	I0904 05:54:32.729514 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.729933 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.729949 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.730383 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.730568 1120739 main.go:141] libmachine: (addons-691233) Calling .GetState
	I0904 05:54:32.731373 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.731388 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.731854 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.732175 1120739 main.go:141] libmachine: (addons-691233) Calling .DriverName
	I0904 05:54:32.732587 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.732623 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.733091 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43611
	I0904 05:54:32.733621 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.733696 1120739 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0904 05:54:32.734144 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.734160 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.734518 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.734698 1120739 main.go:141] libmachine: (addons-691233) Calling .GetState
	I0904 05:54:32.735583 1120739 out.go:179]   - Using image docker.io/registry:3.0.0
	I0904 05:54:32.736243 1120739 main.go:141] libmachine: (addons-691233) Calling .DriverName
	I0904 05:54:32.736688 1120739 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0904 05:54:32.736706 1120739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0904 05:54:32.736727 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:32.737567 1120739 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0904 05:54:32.738060 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38633
	I0904 05:54:32.738576 1120739 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0904 05:54:32.738594 1120739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0904 05:54:32.738612 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:32.739355 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.740087 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.740105 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.740325 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.740816 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:32.740848 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.741041 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.741110 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:32.741449 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:32.741501 1120739 main.go:141] libmachine: (addons-691233) Calling .GetState
	I0904 05:54:32.741640 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:32.741812 1120739 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/id_rsa Username:docker}
	I0904 05:54:32.743496 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.743531 1120739 main.go:141] libmachine: (addons-691233) Calling .DriverName
	I0904 05:54:32.743988 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:32.744011 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.744200 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:32.744420 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:32.744471 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39031
	I0904 05:54:32.744676 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:32.744826 1120739 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/id_rsa Username:docker}
	I0904 05:54:32.745064 1120739 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0904 05:54:32.745808 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.746157 1120739 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0904 05:54:32.746175 1120739 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0904 05:54:32.746195 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:32.746385 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.746407 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.746740 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45241
	I0904 05:54:32.746970 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46261
	I0904 05:54:32.747085 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.747366 1120739 main.go:141] libmachine: (addons-691233) Calling .GetState
	I0904 05:54:32.748489 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.749183 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.749211 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.749277 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35675
	I0904 05:54:32.749552 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.749732 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.750306 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.750386 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.750542 1120739 main.go:141] libmachine: (addons-691233) Calling .GetState
	I0904 05:54:32.750892 1120739 addons.go:238] Setting addon default-storageclass=true in "addons-691233"
	I0904 05:54:32.750938 1120739 host.go:66] Checking if "addons-691233" exists ...
	I0904 05:54:32.751342 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.751376 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.751377 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.751426 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.751850 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:32.751880 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.752069 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:32.752255 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:32.752323 1120739 main.go:141] libmachine: (addons-691233) Calling .DriverName
	I0904 05:54:32.752664 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:32.752720 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.753057 1120739 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/id_rsa Username:docker}
	I0904 05:54:32.754293 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.754315 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.754516 1120739 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0904 05:54:32.755027 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.755358 1120739 main.go:141] libmachine: (addons-691233) Calling .GetState
	I0904 05:54:32.755799 1120739 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0904 05:54:32.755815 1120739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0904 05:54:32.755822 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40933
	I0904 05:54:32.755831 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:32.756901 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.757041 1120739 main.go:141] libmachine: (addons-691233) Calling .GetState
	I0904 05:54:32.757824 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.757949 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.758463 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.758687 1120739 main.go:141] libmachine: (addons-691233) Calling .GetState
	I0904 05:54:32.759169 1120739 main.go:141] libmachine: (addons-691233) Calling .DriverName
	I0904 05:54:32.759609 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.760210 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:32.760246 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.760440 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:32.760603 1120739 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.40
	I0904 05:54:32.760702 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:32.760763 1120739 main.go:141] libmachine: (addons-691233) Calling .DriverName
	I0904 05:54:32.761280 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37459
	I0904 05:54:32.761091 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:32.761618 1120739 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/id_rsa Username:docker}
	I0904 05:54:32.761684 1120739 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0904 05:54:32.761698 1120739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0904 05:54:32.761718 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:32.762568 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.762996 1120739 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0904 05:54:32.763291 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.763309 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.763827 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.764093 1120739 main.go:141] libmachine: (addons-691233) Calling .GetState
	I0904 05:54:32.765255 1120739 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0904 05:54:32.765378 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.765823 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:32.765842 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.766102 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:32.766355 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:32.766410 1120739 main.go:141] libmachine: (addons-691233) Calling .DriverName
	I0904 05:54:32.766660 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:32.766921 1120739 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/id_rsa Username:docker}
	I0904 05:54:32.767584 1120739 main.go:141] libmachine: (addons-691233) Calling .DriverName
	I0904 05:54:32.768214 1120739 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0904 05:54:32.768639 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41077
	I0904 05:54:32.768858 1120739 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0904 05:54:32.769342 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.770060 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.770080 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.770280 1120739 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0904 05:54:32.770636 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.770938 1120739 main.go:141] libmachine: (addons-691233) Calling .GetState
	I0904 05:54:32.771499 1120739 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0904 05:54:32.771580 1120739 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0904 05:54:32.771700 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38145
	I0904 05:54:32.772319 1120739 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0904 05:54:32.772847 1120739 main.go:141] libmachine: (addons-691233) Calling .DriverName
	I0904 05:54:32.773117 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.773268 1120739 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0904 05:54:32.773358 1120739 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0904 05:54:32.773370 1120739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0904 05:54:32.773389 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:32.773602 1120739 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0904 05:54:32.773613 1120739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0904 05:54:32.773627 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:32.773668 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.773693 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.774059 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.774406 1120739 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 05:54:32.774634 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.774657 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.774955 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43007
	I0904 05:54:32.775500 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.775594 1120739 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0904 05:54:32.775717 1120739 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 05:54:32.775733 1120739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 05:54:32.775751 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:32.775946 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.775961 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.776232 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42703
	I0904 05:54:32.776375 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.776567 1120739 main.go:141] libmachine: (addons-691233) Calling .GetState
	I0904 05:54:32.777093 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.777752 1120739 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0904 05:54:32.777801 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.777817 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.778037 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.778246 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.778517 1120739 main.go:141] libmachine: (addons-691233) Calling .GetState
	I0904 05:54:32.778905 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:32.778935 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.779088 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:32.779273 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:32.779440 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:32.779587 1120739 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/id_rsa Username:docker}
	I0904 05:54:32.779651 1120739 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0904 05:54:32.779666 1120739 main.go:141] libmachine: (addons-691233) Calling .DriverName
	I0904 05:54:32.780446 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.780551 1120739 main.go:141] libmachine: (addons-691233) Calling .DriverName
	I0904 05:54:32.780609 1120739 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0904 05:54:32.780624 1120739 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0904 05:54:32.780645 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:32.780988 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:32.781016 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.781125 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:32.781290 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:32.781332 1120739 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0904 05:54:32.781424 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:32.781628 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.781886 1120739 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/id_rsa Username:docker}
	I0904 05:54:32.781925 1120739 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0904 05:54:32.782230 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:32.782254 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.782376 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:32.782575 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:32.782712 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:32.782886 1120739 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/id_rsa Username:docker}
	I0904 05:54:32.783032 1120739 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0904 05:54:32.783048 1120739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0904 05:54:32.783064 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:32.783721 1120739 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0904 05:54:32.783735 1120739 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0904 05:54:32.783751 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:32.784516 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.784979 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:32.785008 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.785176 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:32.785492 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:32.785987 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:32.786212 1120739 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/id_rsa Username:docker}
	I0904 05:54:32.787601 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.788420 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.788456 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:32.788482 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.788690 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:32.788894 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:32.788925 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:32.788961 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.789143 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:32.789147 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:32.789277 1120739 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/id_rsa Username:docker}
	I0904 05:54:32.789301 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:32.789643 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37249
	I0904 05:54:32.789644 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:32.789743 1120739 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/id_rsa Username:docker}
	I0904 05:54:32.790145 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.790592 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.790615 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.791008 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.791194 1120739 main.go:141] libmachine: (addons-691233) Calling .GetState
	I0904 05:54:32.792550 1120739 main.go:141] libmachine: (addons-691233) Calling .DriverName
	I0904 05:54:32.792874 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35733
	I0904 05:54:32.793381 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.793850 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.793891 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.794131 1120739 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.0
	I0904 05:54:32.794216 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.794384 1120739 main.go:141] libmachine: (addons-691233) Calling .DriverName
	I0904 05:54:32.795013 1120739 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0904 05:54:32.795029 1120739 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0904 05:54:32.795041 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:32.795245 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40941
	I0904 05:54:32.795747 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.796266 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.796282 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.796701 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.797567 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:32.798517 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:32.803685 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.804354 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:32.804357 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:32.804422 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.804538 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:32.804728 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:32.804915 1120739 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/id_rsa Username:docker}
	I0904 05:54:32.806126 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45595
	I0904 05:54:32.806583 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.807062 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.807086 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.807440 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.807655 1120739 main.go:141] libmachine: (addons-691233) Calling .GetState
	I0904 05:54:32.809177 1120739 main.go:141] libmachine: (addons-691233) Calling .DriverName
	I0904 05:54:32.810827 1120739 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0904 05:54:32.812145 1120739 out.go:179]   - Using image docker.io/busybox:stable
	I0904 05:54:32.813298 1120739 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0904 05:54:32.813316 1120739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0904 05:54:32.813335 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:32.815421 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45225
	I0904 05:54:32.815943 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:32.816300 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.816385 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:32.816405 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:32.816700 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:32.816732 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.816774 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:32.816912 1120739 main.go:141] libmachine: (addons-691233) Calling .GetState
	I0904 05:54:32.816951 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:32.817094 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:32.817202 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:32.817331 1120739 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/id_rsa Username:docker}
	I0904 05:54:32.818354 1120739 main.go:141] libmachine: (addons-691233) Calling .DriverName
	I0904 05:54:32.818543 1120739 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 05:54:32.818557 1120739 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 05:54:32.818569 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:32.820819 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.821208 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:32.821233 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:32.821354 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:32.821503 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:32.821645 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:32.821792 1120739 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/id_rsa Username:docker}
	W0904 05:54:33.009012 1120739 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:35412->192.168.39.193:22: read: connection reset by peer
	I0904 05:54:33.009055 1120739 retry.go:31] will retry after 173.172283ms: ssh: handshake failed: read tcp 192.168.39.1:35412->192.168.39.193:22: read: connection reset by peer
	I0904 05:54:33.053774 1120739 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0904 05:54:33.073053 1120739 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W0904 05:54:33.186581 1120739 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:35416->192.168.39.193:22: read: connection reset by peer
	I0904 05:54:33.186620 1120739 retry.go:31] will retry after 242.413077ms: ssh: handshake failed: read tcp 192.168.39.1:35416->192.168.39.193:22: read: connection reset by peer
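The two handshake failures above (05:54:33.009 and 05:54:33.186) come from the guest's sshd resetting connections while the freshly booted VM is still settling; retry.go simply re-dials after a short randomized delay. A minimal Go sketch of that pattern, assuming a plain TCP dial stands in for the SSH handshake (the helper names here are illustrative, not minikube's own):

	package main

	import (
		"fmt"
		"math/rand"
		"net"
		"time"
	)

	// dialOnce stands in for the SSH handshake; a connection reset by the
	// guest's sshd surfaces here just as in the log lines above.
	func dialOnce(addr string) error {
		c, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			return err
		}
		return c.Close()
	}

	// dialWithRetry re-dials after a short randomized delay, mirroring the
	// "will retry after 173.172283ms" messages emitted by retry.go.
	func dialWithRetry(addr string, attempts int) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = dialOnce(addr); err == nil {
				return nil
			}
			delay := time.Duration(100+rand.Intn(200)) * time.Millisecond
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		if err := dialWithRetry("192.168.39.193:22", 3); err != nil {
			fmt.Println("giving up:", err)
		}
	}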
	I0904 05:54:33.515780 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0904 05:54:33.548901 1120739 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0904 05:54:33.548939 1120739 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0904 05:54:33.564588 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0904 05:54:33.573005 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0904 05:54:33.581758 1120739 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0904 05:54:33.581787 1120739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0904 05:54:33.583129 1120739 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0904 05:54:33.583150 1120739 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0904 05:54:33.685657 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0904 05:54:33.698295 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0904 05:54:33.774125 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0904 05:54:33.815229 1120739 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0904 05:54:33.815262 1120739 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0904 05:54:33.827468 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 05:54:33.833999 1120739 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0904 05:54:33.834021 1120739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0904 05:54:33.842873 1120739 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0904 05:54:33.842893 1120739 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0904 05:54:33.851858 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0904 05:54:33.992688 1120739 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0904 05:54:33.992728 1120739 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0904 05:54:34.095160 1120739 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0904 05:54:34.095193 1120739 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0904 05:54:34.105762 1120739 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0904 05:54:34.105812 1120739 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0904 05:54:34.353394 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 05:54:34.367384 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 05:54:34.430940 1120739 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0904 05:54:34.430970 1120739 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0904 05:54:34.460912 1120739 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0904 05:54:34.460952 1120739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0904 05:54:34.575192 1120739 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 05:54:34.575229 1120739 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0904 05:54:34.646240 1120739 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0904 05:54:34.646272 1120739 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0904 05:54:34.649617 1120739 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0904 05:54:34.649650 1120739 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0904 05:54:34.787859 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0904 05:54:34.832698 1120739 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0904 05:54:34.832728 1120739 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0904 05:54:34.863617 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 05:54:34.997456 1120739 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0904 05:54:34.997494 1120739 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0904 05:54:35.022403 1120739 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0904 05:54:35.022434 1120739 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0904 05:54:35.128938 1120739 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0904 05:54:35.128968 1120739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0904 05:54:35.389115 1120739 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 05:54:35.389142 1120739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0904 05:54:35.430477 1120739 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0904 05:54:35.430513 1120739 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0904 05:54:35.484872 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0904 05:54:35.786959 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 05:54:35.815518 1120739 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0904 05:54:35.815542 1120739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0904 05:54:36.185014 1120739 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.131194493s)
	I0904 05:54:36.185048 1120739 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
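The sed pipeline that just completed rewrites the coredns ConfigMap in place: it inserts a hosts block ahead of the existing forward directive and a log directive ahead of errors, then feeds the result back through kubectl replace. After the replace, the affected portion of the Corefile looks roughly like this (surrounding directives elided; only the two injected stanzas are taken from the command itself):

	        log
	        errors
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf

This is what lets pods resolve host.minikube.internal to the host side of the KVM network (192.168.39.1), with all other names falling through to the normal forwarder.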
	I0904 05:54:36.185159 1120739 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.112060157s)
	I0904 05:54:36.186213 1120739 node_ready.go:35] waiting up to 6m0s for node "addons-691233" to be "Ready" ...
	I0904 05:54:36.190669 1120739 node_ready.go:49] node "addons-691233" is "Ready"
	I0904 05:54:36.190695 1120739 node_ready.go:38] duration metric: took 4.449543ms for node "addons-691233" to be "Ready" ...
	I0904 05:54:36.190708 1120739 api_server.go:52] waiting for apiserver process to appear ...
	I0904 05:54:36.190754 1120739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 05:54:36.316304 1120739 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0904 05:54:36.316332 1120739 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0904 05:54:36.693580 1120739 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-691233" context rescaled to 1 replicas
	I0904 05:54:36.719555 1120739 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0904 05:54:36.719580 1120739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0904 05:54:37.009286 1120739 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0904 05:54:37.009322 1120739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0904 05:54:37.362873 1120739 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0904 05:54:37.362906 1120739 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0904 05:54:37.547975 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0904 05:54:40.238200 1120739 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0904 05:54:40.238246 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:40.241246 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:40.241732 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:40.241763 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:40.241928 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:40.242113 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:40.242248 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:40.242427 1120739 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/id_rsa Username:docker}
	I0904 05:54:40.438776 1120739 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0904 05:54:40.532891 1120739 addons.go:238] Setting addon gcp-auth=true in "addons-691233"
	I0904 05:54:40.532965 1120739 host.go:66] Checking if "addons-691233" exists ...
	I0904 05:54:40.533287 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:40.533326 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:40.549982 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38955
	I0904 05:54:40.550474 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:40.551004 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:40.551026 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:40.551384 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:40.551996 1120739 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:54:40.552031 1120739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 05:54:40.568498 1120739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44995
	I0904 05:54:40.569071 1120739 main.go:141] libmachine: () Calling .GetVersion
	I0904 05:54:40.569496 1120739 main.go:141] libmachine: Using API Version  1
	I0904 05:54:40.569559 1120739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 05:54:40.570071 1120739 main.go:141] libmachine: () Calling .GetMachineName
	I0904 05:54:40.570290 1120739 main.go:141] libmachine: (addons-691233) Calling .GetState
	I0904 05:54:40.572003 1120739 main.go:141] libmachine: (addons-691233) Calling .DriverName
	I0904 05:54:40.572221 1120739 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0904 05:54:40.572245 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHHostname
	I0904 05:54:40.575022 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:40.575404 1120739 main.go:141] libmachine: (addons-691233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:5e:02", ip: ""} in network mk-addons-691233: {Iface:virbr1 ExpiryTime:2025-09-04 06:54:00 +0000 UTC Type:0 Mac:52:54:00:45:5e:02 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:addons-691233 Clientid:01:52:54:00:45:5e:02}
	I0904 05:54:40.575432 1120739 main.go:141] libmachine: (addons-691233) DBG | domain addons-691233 has defined IP address 192.168.39.193 and MAC address 52:54:00:45:5e:02 in network mk-addons-691233
	I0904 05:54:40.575609 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHPort
	I0904 05:54:40.575774 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHKeyPath
	I0904 05:54:40.575943 1120739 main.go:141] libmachine: (addons-691233) Calling .GetSSHUsername
	I0904 05:54:40.576078 1120739 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/addons-691233/id_rsa Username:docker}
	I0904 05:54:41.082187 1120739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.566352785s)
	I0904 05:54:41.082253 1120739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.517622679s)
	I0904 05:54:41.082327 1120739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.39663513s)
	I0904 05:54:41.082261 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:41.082367 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:41.082375 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:41.082380 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:41.082384 1120739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.384065078s)
	I0904 05:54:41.082419 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:41.082436 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:41.082284 1120739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.509239591s)
	I0904 05:54:41.082337 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:41.082502 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:41.082471 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:41.082547 1120739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.308365179s)
	I0904 05:54:41.082587 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:41.082603 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:41.082619 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:41.082625 1120739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.255130839s)
	I0904 05:54:41.082651 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:41.082658 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:41.082664 1120739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.230781784s)
	I0904 05:54:41.082686 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:41.082695 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:41.082767 1120739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.72934661s)
	W0904 05:54:41.082790 1120739 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 05:54:41.082827 1120739 retry.go:31] will retry after 250.583239ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
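The likely root cause of this validation error is visible earlier in the log: the scp at 05:54:32.795 copied ig-crd.yaml across at only 14 bytes, far too small to hold a CRD manifest, so kubectl correctly reports that apiVersion and kind are missing. Every object kubectl applies must carry at least these top-level fields; a hypothetical minimal skeleton (the name is illustrative only, not inspektor-gadget's actual CRD):

	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: examples.gadget.example.io

Because the retry re-applies the same file, the 250ms backoff alone cannot change the outcome unless the addon loop re-copies the file's contents first.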
	I0904 05:54:41.082896 1120739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.715479279s)
	I0904 05:54:41.082925 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:41.082929 1120739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.295039437s)
	I0904 05:54:41.082935 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:41.082949 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:41.082959 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:41.083043 1120739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.219399012s)
	I0904 05:54:41.083065 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:41.083075 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:41.083150 1120739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.598246643s)
	I0904 05:54:41.083169 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:41.083177 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:41.083552 1120739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.296511207s)
	W0904 05:54:41.083614 1120739 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0904 05:54:41.083634 1120739 retry.go:31] will retry after 198.217676ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
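Unlike the ig-crd.yaml failure, this one is a pure ordering race: the VolumeSnapshot CRDs are created in the same kubectl apply that tries to instantiate csi-hostpath-snapclass, and the API server has not yet established the new types when the class is submitted, hence "ensure CRDs are installed first". The 198ms retry is usually enough; the race can also be avoided outright by splitting the apply and waiting for the CRD to become established, e.g.:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml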
	I0904 05:54:41.083674 1120739 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.892903769s)
	I0904 05:54:41.083706 1120739 api_server.go:72] duration metric: took 8.461648436s to wait for apiserver process to appear ...
	I0904 05:54:41.083731 1120739 api_server.go:88] waiting for apiserver healthz status ...
	I0904 05:54:41.083750 1120739 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8443/healthz ...
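The healthz probe that starts here is an ordinary HTTPS GET against the API server; a 200 response with an "ok" body ends the wait. Assuming the default RBAC, which lets unauthenticated clients read /healthz via the system:public-info-viewer role, it can be reproduced by hand from the host (-k because the cluster serves a self-signed certificate):

	curl -k https://192.168.39.193:8443/healthz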
	I0904 05:54:41.088220 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:41.088260 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:41.088268 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:41.088286 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:41.088293 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:41.088300 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:41.088302 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:41.088323 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:41.088325 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:41.088326 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:41.088336 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:41.088347 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:41.088349 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:41.088361 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:41.088310 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:41.088376 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:41.088381 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:41.088384 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:41.088390 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:41.088401 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:41.088404 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:41.088417 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:41.088409 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:41.088438 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:41.088361 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:41.088279 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:41.088448 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:41.088451 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:41.088456 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:41.088304 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:41.088484 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:41.088487 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:41.088497 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:41.088442 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:41.088508 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:41.088513 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:41.088520 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:41.088342 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:41.088532 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:41.088522 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:41.088603 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:41.088267 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:41.088657 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:41.088691 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:41.088699 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:41.088763 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:41.088391 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:41.088420 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:41.088824 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:41.088831 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:41.088841 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:41.088845 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:41.088392 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:41.088867 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:41.088874 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:41.088876 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:41.088882 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:41.088884 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:41.088889 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:41.088894 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:41.088902 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:41.088924 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:41.088933 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:41.088939 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:41.088950 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:41.088963 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:41.093168 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:41.093182 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:41.093188 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:41.093193 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:41.093210 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:41.093212 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:41.093213 1120739 addons.go:479] Verifying addon ingress=true in "addons-691233"
	I0904 05:54:41.093219 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:41.093239 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:41.093270 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:41.093355 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:41.093368 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:41.093378 1120739 addons.go:479] Verifying addon registry=true in "addons-691233"
	I0904 05:54:41.093462 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:41.093217 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:41.093503 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:41.093513 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:41.093317 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:41.093556 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:41.093568 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:41.093576 1120739 addons.go:479] Verifying addon metrics-server=true in "addons-691233"
	I0904 05:54:41.093578 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:41.093334 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:41.093598 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:41.093607 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:41.093558 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:41.093663 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:41.093684 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:41.093298 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:41.093706 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:41.093691 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:41.095193 1120739 out.go:179] * Verifying ingress addon...
	I0904 05:54:41.095193 1120739 out.go:179] * Verifying registry addon...
	I0904 05:54:41.095903 1120739 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-691233 service yakd-dashboard -n yakd-dashboard
	
	I0904 05:54:41.097280 1120739 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0904 05:54:41.097550 1120739 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
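What kapi.go:75/96 is logging above and below is a simple poll: list the pods matching a label selector in a namespace and keep re-checking until each one reports Running. A minimal sketch of that pattern with client-go, assuming a reachable kubeconfig; the selector and namespace are taken from the log, while the function name, interval and timeout are illustrative rather than minikube's actual code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls until every pod matching selector in ns is Running.
func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			running := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					running = false
					break
				}
			}
			if running {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // kapi re-checks on a similar short interval
	}
	return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Selector and namespace as logged by kapi.go:75 above.
	if err := waitForPods(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("ingress-nginx pods are Running")
}

The "current state: Pending: [<nil>]" lines that repeat through the rest of this log are successive iterations of exactly this loop, one per selector being verified.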
	I0904 05:54:41.128471 1120739 api_server.go:279] https://192.168.39.193:8443/healthz returned 200:
	ok
	I0904 05:54:41.129963 1120739 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0904 05:54:41.129987 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:41.130086 1120739 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0904 05:54:41.130101 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:41.133873 1120739 api_server.go:141] control plane version: v1.34.0
	I0904 05:54:41.133904 1120739 api_server.go:131] duration metric: took 50.16456ms to wait for apiserver health ...
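The api_server.go:279 line above records a raw HTTPS GET against the control plane's /healthz endpoint, considered healthy once it returns 200 with body "ok". A sketch of that probe follows; note the real check authenticates with the cluster's CA and client certificates, so the InsecureSkipVerify here is an explicit shortcut for illustration only, not what minikube does:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: minikube trusts the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.193:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
}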
	I0904 05:54:41.133916 1120739 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 05:54:41.160374 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:41.160393 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:41.160698 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:41.160718 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	W0904 05:54:41.160821 1120739 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
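That 'storage-provisioner-rancher' warning is a plain optimistic-concurrency failure: something else updated the StorageClass between minikube's read and its write, so the apiserver rejected the stale object ("the object has been modified"). client-go ships the standard remedy, retry.RetryOnConflict, which re-reads the object and re-applies the mutation on every attempt. A hedged sketch of marking local-path default that way (not minikube's actual code path):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read on every attempt so the update carries a fresh resourceVersion.
		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "local-path", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("local-path marked default")
}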
	I0904 05:54:41.183208 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:41.183238 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:41.183558 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:41.183581 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:41.196671 1120739 system_pods.go:59] 16 kube-system pods found
	I0904 05:54:41.196705 1120739 system_pods.go:61] "amd-gpu-device-plugin-7rq2w" [ee8daf49-7dbe-4e6b-bb4b-d60d8514c314] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0904 05:54:41.196713 1120739 system_pods.go:61] "coredns-66bc5c9577-25j5f" [4d3957e5-f728-4f0a-88e6-23fb77e7cd3b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 05:54:41.196720 1120739 system_pods.go:61] "coredns-66bc5c9577-ccfsq" [181c1ea0-7d38-4b4b-b7c5-6a289684b4fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 05:54:41.196728 1120739 system_pods.go:61] "etcd-addons-691233" [121aa01e-ee3f-438a-bbb6-ae56ce72c2ee] Running
	I0904 05:54:41.196734 1120739 system_pods.go:61] "kube-apiserver-addons-691233" [f49f07ef-afff-44d5-b914-318ae921847f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 05:54:41.196740 1120739 system_pods.go:61] "kube-controller-manager-addons-691233" [f70d0726-e6dd-49c6-8ba8-41d3ce02d7c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 05:54:41.196756 1120739 system_pods.go:61] "kube-ingress-dns-minikube" [52679a53-8aa8-4c2e-acc1-56dd0dbf97a1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0904 05:54:41.196762 1120739 system_pods.go:61] "kube-proxy-5qvc9" [d8daf4d1-c29b-4ec9-b102-d5fe8cf3b74f] Running
	I0904 05:54:41.196768 1120739 system_pods.go:61] "kube-scheduler-addons-691233" [d63861d0-bcef-4426-9b8f-04f7aa4109c5] Running
	I0904 05:54:41.196775 1120739 system_pods.go:61] "metrics-server-85b7d694d7-zsdw6" [4d80e196-44de-49cc-a421-dc451907e628] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 05:54:41.196787 1120739 system_pods.go:61] "nvidia-device-plugin-daemonset-7zlkd" [d422839b-7e87-41d0-b5fa-45d2eb76881d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0904 05:54:41.196798 1120739 system_pods.go:61] "registry-66898fdd98-582kf" [fa33c11f-067f-4e95-aa92-9973bc0df7da] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0904 05:54:41.196803 1120739 system_pods.go:61] "registry-creds-764b6fb674-578m2" [79d671d8-2113-4444-a226-371993327011] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0904 05:54:41.196812 1120739 system_pods.go:61] "registry-proxy-5tk68" [58650695-54df-4940-a8c6-50e3ad46a596] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0904 05:54:41.196816 1120739 system_pods.go:61] "snapshot-controller-7d9fbc56b8-dzrbq" [59cab63f-dd1c-4202-a8b6-3369c4c5e0f2] Pending
	I0904 05:54:41.196821 1120739 system_pods.go:61] "storage-provisioner" [488a2d80-840e-4719-b187-4be1f50a7f90] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 05:54:41.196827 1120739 system_pods.go:74] duration metric: took 62.904908ms to wait for pod list to return data ...
	I0904 05:54:41.196841 1120739 default_sa.go:34] waiting for default service account to be created ...
	I0904 05:54:41.224045 1120739 default_sa.go:45] found service account: "default"
	I0904 05:54:41.224072 1120739 default_sa.go:55] duration metric: took 27.225552ms for default service account to be created ...
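default_sa.go waits on the "default" ServiceAccount because pods cannot be admitted into a namespace until the token controller has created it; the check itself is just a Get in a loop. A hedged equivalent:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			fmt.Printf("found service account: %q\n", sa.Name)
			return
		}
		// The token controller creates it shortly after namespace setup.
		time.Sleep(250 * time.Millisecond)
	}
}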
	I0904 05:54:41.224083 1120739 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 05:54:41.252523 1120739 system_pods.go:86] 17 kube-system pods found
	I0904 05:54:41.252554 1120739 system_pods.go:89] "amd-gpu-device-plugin-7rq2w" [ee8daf49-7dbe-4e6b-bb4b-d60d8514c314] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0904 05:54:41.252562 1120739 system_pods.go:89] "coredns-66bc5c9577-25j5f" [4d3957e5-f728-4f0a-88e6-23fb77e7cd3b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 05:54:41.252573 1120739 system_pods.go:89] "coredns-66bc5c9577-ccfsq" [181c1ea0-7d38-4b4b-b7c5-6a289684b4fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 05:54:41.252577 1120739 system_pods.go:89] "etcd-addons-691233" [121aa01e-ee3f-438a-bbb6-ae56ce72c2ee] Running
	I0904 05:54:41.252582 1120739 system_pods.go:89] "kube-apiserver-addons-691233" [f49f07ef-afff-44d5-b914-318ae921847f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 05:54:41.252588 1120739 system_pods.go:89] "kube-controller-manager-addons-691233" [f70d0726-e6dd-49c6-8ba8-41d3ce02d7c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 05:54:41.252593 1120739 system_pods.go:89] "kube-ingress-dns-minikube" [52679a53-8aa8-4c2e-acc1-56dd0dbf97a1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0904 05:54:41.252596 1120739 system_pods.go:89] "kube-proxy-5qvc9" [d8daf4d1-c29b-4ec9-b102-d5fe8cf3b74f] Running
	I0904 05:54:41.252600 1120739 system_pods.go:89] "kube-scheduler-addons-691233" [d63861d0-bcef-4426-9b8f-04f7aa4109c5] Running
	I0904 05:54:41.252607 1120739 system_pods.go:89] "metrics-server-85b7d694d7-zsdw6" [4d80e196-44de-49cc-a421-dc451907e628] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0904 05:54:41.252611 1120739 system_pods.go:89] "nvidia-device-plugin-daemonset-7zlkd" [d422839b-7e87-41d0-b5fa-45d2eb76881d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0904 05:54:41.252617 1120739 system_pods.go:89] "registry-66898fdd98-582kf" [fa33c11f-067f-4e95-aa92-9973bc0df7da] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0904 05:54:41.252622 1120739 system_pods.go:89] "registry-creds-764b6fb674-578m2" [79d671d8-2113-4444-a226-371993327011] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0904 05:54:41.252630 1120739 system_pods.go:89] "registry-proxy-5tk68" [58650695-54df-4940-a8c6-50e3ad46a596] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0904 05:54:41.252633 1120739 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cwx77" [92ef5b5e-4f03-4460-ac42-9c4c8ffc9c7f] Pending
	I0904 05:54:41.252640 1120739 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dzrbq" [59cab63f-dd1c-4202-a8b6-3369c4c5e0f2] Pending
	I0904 05:54:41.252645 1120739 system_pods.go:89] "storage-provisioner" [488a2d80-840e-4719-b187-4be1f50a7f90] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 05:54:41.252652 1120739 system_pods.go:126] duration metric: took 28.563582ms to wait for k8s-apps to be running ...
	I0904 05:54:41.252663 1120739 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 05:54:41.252712 1120739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
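system_svc.go's kubelet check, issued over the SSH channel above, leans on the fact that `systemctl is-active --quiet` prints nothing and signals state purely through its exit code. Run locally, the equivalent is a one-liner (a sketch, not minikube's ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit code 0 means the unit is active; non-zero means
	// inactive, failed, or unknown.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}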
	I0904 05:54:41.282854 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 05:54:41.333779 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 05:54:41.611910 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:41.612008 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:42.129000 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:42.129168 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:42.215349 1120739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.667291501s)
	I0904 05:54:42.215403 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:42.215416 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:42.215496 1120739 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.64324852s)
	I0904 05:54:42.215567 1120739 system_svc.go:56] duration metric: took 962.888466ms WaitForService to wait for kubelet
	I0904 05:54:42.215591 1120739 kubeadm.go:578] duration metric: took 9.593534035s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 05:54:42.215733 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:42.215624 1120739 node_conditions.go:102] verifying NodePressure condition ...
	I0904 05:54:42.215822 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:42.215838 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:42.215859 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:42.215869 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:42.216136 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:42.216155 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:42.216171 1120739 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-691233"
	I0904 05:54:42.217805 1120739 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0904 05:54:42.218557 1120739 out.go:179] * Verifying csi-hostpath-driver addon...
	I0904 05:54:42.219902 1120739 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0904 05:54:42.220543 1120739 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0904 05:54:42.220817 1120739 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0904 05:54:42.220838 1120739 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0904 05:54:42.240483 1120739 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0904 05:54:42.240512 1120739 node_conditions.go:123] node cpu capacity is 2
	I0904 05:54:42.240525 1120739 node_conditions.go:105] duration metric: took 24.763667ms to run NodePressure ...
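The node_conditions.go lines above come from reading each node's capacity and pressure conditions: scheduling stalls if MemoryPressure, DiskPressure, or PIDPressure flips to True, so the test asserts they are all False. A minimal sketch of that read with client-go (the kubeconfig setup is assumed, as before):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		for _, c := range n.Status.Conditions {
			// Pressure conditions must be False for the node to stay schedulable.
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}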
	I0904 05:54:42.240538 1120739 start.go:241] waiting for startup goroutines ...
	I0904 05:54:42.261522 1120739 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0904 05:54:42.261549 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:42.392630 1120739 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0904 05:54:42.392685 1120739 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0904 05:54:42.520680 1120739 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0904 05:54:42.520714 1120739 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
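The "scp memory --> ..." lines show ssh_runner pushing a rendered manifest straight from memory onto the guest. Under the hood that is just an SSH session with the payload on stdin and `sudo tee` on the far end; a trimmed sketch with golang.org/x/crypto/ssh, where the address, user, key path, and payload are placeholders for this VM:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/user/.minikube/machines/addons-691233/id_rsa") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.193:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	payload := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: gcp-auth\n") // stand-in content
	sess.Stdin = bytes.NewReader(payload)
	// "sudo tee" writes stdin to the destination; its echo is discarded.
	if err := sess.Run("sudo tee /etc/kubernetes/addons/gcp-auth-ns.yaml >/dev/null"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("copied", len(payload), "bytes")
}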
	I0904 05:54:42.604946 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:42.606077 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:42.700638 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0904 05:54:42.738512 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:43.109319 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:43.109639 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:43.228577 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:43.605274 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:43.605343 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:43.731705 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:43.925169 1120739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.642222691s)
	I0904 05:54:43.925249 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:43.925271 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:43.925596 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:43.925648 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:43.925668 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:43.925681 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:43.925686 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:43.925947 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:43.925966 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:44.107525 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:44.108906 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:44.235253 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:44.286737 1120739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.952898495s)
	W0904 05:54:44.286802 1120739 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 05:54:44.286855 1120739 retry.go:31] will retry after 404.793024ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
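Every retry of this apply fails the same way: the rendered ig-crd.yaml contains a YAML document missing apiVersion and kind, which kubectl's client-side validation rejects before anything is sent to the server, so the apply can never converge no matter how often it is re-run (the objects in ig-deployment.yaml keep going through as "unchanged"/"configured", only the CRD file is rejected). A quick pre-flight that catches this class of broken manifest before shelling out to kubectl, sketched with gopkg.in/yaml.v3; the helper name and the choice of that package are illustrative:

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// checkManifest decodes each YAML document in the file and verifies the
// two fields kubectl's client-side validation demands on every object.
func checkManifest(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for i := 0; ; i++ {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				return nil
			}
			return err
		}
		if doc == nil { // empty document between "---" separators
			continue
		}
		if doc["apiVersion"] == nil || doc["kind"] == nil {
			return fmt.Errorf("%s: document %d: apiVersion/kind not set", path, i)
		}
	}
}

func main() {
	if err := checkManifest("/etc/kubernetes/addons/ig-crd.yaml"); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
}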
	I0904 05:54:44.525150 1120739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.824466362s)
	I0904 05:54:44.525235 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:44.525254 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:44.525634 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:44.525656 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:44.525674 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:54:44.525682 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:54:44.525938 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:54:44.525986 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:54:44.525967 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:54:44.527028 1120739 addons.go:479] Verifying addon gcp-auth=true in "addons-691233"
	I0904 05:54:44.528569 1120739 out.go:179] * Verifying gcp-auth addon...
	I0904 05:54:44.530545 1120739 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0904 05:54:44.547264 1120739 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0904 05:54:44.547288 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:44.605906 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:44.605976 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:44.692695 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 05:54:44.726664 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:45.038368 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:45.141237 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:45.141443 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:45.233857 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:45.538753 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:45.637340 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:45.637883 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:45.724792 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:45.905513 1120739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.212770886s)
	W0904 05:54:45.905568 1120739 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 05:54:45.905598 1120739 retry.go:31] will retry after 517.327333ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 05:54:46.034242 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:46.100705 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:46.101321 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:46.225199 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:46.423431 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 05:54:46.534357 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:46.602261 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:46.603734 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:46.723838 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:47.035706 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:47.109064 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:47.109189 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0904 05:54:47.203265 1120739 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 05:54:47.203312 1120739 retry.go:31] will retry after 811.865528ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 05:54:47.225171 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:47.536119 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:47.602253 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:47.604830 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:47.725431 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:48.015352 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 05:54:48.035924 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:48.104087 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:48.104380 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:48.229715 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:48.537070 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:48.604281 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:48.605299 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:48.727106 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:49.035827 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:49.100276 1120739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.084875648s)
	W0904 05:54:49.100318 1120739 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 05:54:49.100339 1120739 retry.go:31] will retry after 721.2808ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 05:54:49.105214 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:49.105418 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:49.225726 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:49.536368 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:49.607288 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:49.609397 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:49.725621 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:49.822786 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 05:54:50.037107 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:50.108203 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:50.109819 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:50.229881 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:50.533770 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:50.601757 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:50.604233 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:50.727071 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:51.037652 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:51.052766 1120739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.22992479s)
	W0904 05:54:51.052838 1120739 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 05:54:51.052866 1120739 retry.go:31] will retry after 2.6834484s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
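The retry.go:31 lines trace the addon applier's backoff: each failed apply is re-run after a delay that grows with jitter — 404ms, 517ms, 811ms, 721ms, 2.68s and climbing in this run. A generic version of that loop (hedged; minikube's own retry helper differs in detail):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or maxTime elapses,
// roughly doubling the jittered delay between attempts.
func retryWithBackoff(fn func() error, initial, maxTime time.Duration) error {
	start := time.Now()
	delay := initial
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > maxTime {
			return fmt.Errorf("giving up after %s: %w", time.Since(start).Round(time.Millisecond), err)
		}
		// Jitter of roughly +/-50% keeps concurrent retries from synchronizing,
		// which matches the uneven delays seen in the log above.
		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return fmt.Errorf("apply failed (attempt %d)", attempts)
		}
		return nil
	}, 400*time.Millisecond, time.Minute)
	fmt.Println(err, "after", attempts, "attempts")
}

Because the ig-crd.yaml validation error is deterministic, this backoff can only delay the inevitable here; it exists for transient failures such as the storage class conflict seen earlier.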
	I0904 05:54:51.104038 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:51.104466 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:51.226646 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:51.534291 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:51.617127 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:51.617199 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:51.727099 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:52.036004 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:52.102190 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:52.102910 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:52.227921 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:52.535506 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:52.601680 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:52.602876 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:52.726510 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:53.316787 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:53.316847 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:53.316857 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:53.316856 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:53.533936 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:53.603999 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:53.604038 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:53.725538 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:53.736557 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 05:54:54.034411 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:54.101314 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:54.103461 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:54.227802 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:54.533537 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:54.603144 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:54.603241 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:54.724279 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0904 05:54:54.727683 1120739 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 05:54:54.727712 1120739 retry.go:31] will retry after 3.786736481s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 05:54:55.035177 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:55.102820 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:55.102999 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:55.225246 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:55.538873 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:55.604163 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:55.606701 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:55.726063 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:56.034700 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:56.102119 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:56.102328 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:56.424160 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:56.569999 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:56.604427 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:56.604670 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:56.725228 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:57.037150 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:57.107755 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:57.108490 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:57.227662 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:57.536973 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:57.603343 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:57.606399 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:57.724602 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:58.274039 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:58.274643 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:58.275492 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:58.276288 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:58.515660 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 05:54:58.534201 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:58.601903 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:58.602761 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:58.726468 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:54:59.035007 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:59.102252 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:59.102811 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:59.226381 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0904 05:54:59.336870 1120739 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 05:54:59.336911 1120739 retry.go:31] will retry after 4.562299398s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
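
The stderr above pinpoints the root cause of every retry in this section: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because the manifest's two mandatory top-level fields, apiVersion and kind, are absent, while the companion ig-deployment.yaml objects apply cleanly ("unchanged"/"configured"). As a minimal sketch of a header that would pass this check, assuming the file is meant to carry the Inspektor Gadget CRD (the actual contents of ig-crd.yaml are not shown in this log, and every name below is illustrative, not taken from this run):

	apiVersion: apiextensions.k8s.io/v1    # mandatory top-level field flagged by the error
	kind: CustomResourceDefinition         # mandatory top-level field flagged by the error
	metadata:
	  name: traces.gadget.kinvolk.io       # hypothetical; must be <plural>.<group>
	spec:
	  group: gadget.kinvolk.io             # hypothetical group
	  scope: Namespaced
	  names:
	    plural: traces
	    singular: trace
	    kind: Trace
	  versions:
	    - name: v1alpha1
	      served: true
	      storage: true
	      schema:
	        openAPIV3Schema:
	          type: object

Note that the --validate=false escape hatch suggested in the error would not, in general, rescue this apply: without a kind field kubectl cannot map the document to any API resource, so decoding fails before a request is ever sent.
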
	I0904 05:54:59.535407 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:54:59.600832 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:54:59.601368 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:54:59.724598 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:00.033669 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:00.101077 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:00.101209 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:00.225053 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:00.534757 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:00.601166 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:00.601423 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:00.724968 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:01.034706 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:01.102515 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:01.102566 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:01.224988 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:01.539082 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:01.611527 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:01.611707 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:01.727104 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:02.034279 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:02.103176 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:02.104366 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:02.226764 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:02.534340 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:02.602335 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:02.605450 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:02.726683 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:03.033281 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:03.103336 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:03.107418 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:03.228425 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:03.705473 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:03.705538 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:03.706361 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:03.726312 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:03.900427 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 05:55:04.034821 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:04.104253 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:04.105807 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:04.225152 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:04.536893 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:04.602143 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:04.602346 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:04.727446 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:04.912845 1120739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.012368805s)
	W0904 05:55:04.912899 1120739 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 05:55:04.912929 1120739 retry.go:31] will retry after 7.91209574s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 05:55:05.036613 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:05.101101 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:05.102317 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:05.226702 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:05.534199 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:05.601756 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:05.602101 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:05.725269 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:06.034264 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:06.101189 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:06.101459 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:06.225282 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:06.534911 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:06.601328 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:06.601489 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:06.725182 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:07.034282 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:07.101298 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:07.102125 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:07.225027 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:07.536565 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:07.602195 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:07.602557 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:07.727073 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:08.035391 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:08.102166 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:08.102320 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:08.227921 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:08.536079 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:08.601883 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:08.601959 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:08.725421 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:09.043932 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:09.101477 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:09.102080 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:09.229775 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:09.533803 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:09.602021 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:09.602266 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:09.723701 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:10.036002 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:10.103459 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:10.103598 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:10.224150 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:10.533823 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:10.604591 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:10.605447 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:10.726166 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:11.034914 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:11.103763 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:11.105418 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:11.227333 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:11.534939 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:11.603239 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:11.603393 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:11.726485 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:12.034660 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:12.101678 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:12.101849 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:12.225162 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:12.534379 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:12.601705 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:12.601733 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:12.726941 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:12.825652 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 05:55:13.036760 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:13.103776 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:13.107351 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:13.224600 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:13.535135 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:13.604393 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:13.605713 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:13.727789 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0904 05:55:13.762411 1120739 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 05:55:13.762463 1120739 retry.go:31] will retry after 5.170132631s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 05:55:14.035440 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:14.101497 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:14.101563 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:14.225599 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:14.536683 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:14.602978 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:14.605817 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:14.726826 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:15.034085 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:15.101588 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:15.101930 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:15.224600 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:15.534210 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:15.602943 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:15.604170 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:15.730127 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:16.035248 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:16.102144 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:16.102200 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:16.225228 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:16.534820 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:16.601524 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:16.602008 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:16.724279 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:17.034045 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:17.102917 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:17.102986 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:17.224955 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:17.534093 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:17.602696 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:17.603309 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:17.726081 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:18.036527 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:18.103795 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:18.104377 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:18.227126 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:18.536135 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:18.704408 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:18.704509 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:18.726951 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:18.932975 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 05:55:19.037107 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:19.111277 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:19.111296 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:19.224776 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:19.534926 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:19.605789 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:19.605789 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:19.726893 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0904 05:55:19.826881 1120739 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 05:55:19.826921 1120739 retry.go:31] will retry after 20.447396531s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 05:55:20.035347 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:20.104579 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:20.104924 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:20.225237 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:20.535187 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:20.603048 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:20.604336 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:20.724543 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:21.034021 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:21.135811 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:21.136109 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:21.235724 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:21.533633 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:21.601767 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:21.601838 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:21.725929 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:22.033594 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:22.104353 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:22.104976 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:22.224819 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:22.534233 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:22.601630 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:22.603325 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:22.724277 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:23.033478 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:23.100852 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:23.101129 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:23.224639 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:23.533668 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:23.602279 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:23.603790 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:23.724984 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:24.037046 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:24.103877 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:24.104996 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:24.226409 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:24.534191 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:24.601120 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:24.603044 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:24.724812 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:25.061382 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:25.100121 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 05:55:25.100941 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:25.225115 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:25.534102 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:25.601543 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:25.601542 1120739 kapi.go:107] duration metric: took 44.50398718s to wait for kubernetes.io/minikube-addons=registry ...
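(The registry waiter exits here after 44.5s, which is why "kubernetes.io/minikube-addons=registry" drops out of the polling lines that follow; only gcp-auth, ingress-nginx, and csi-hostpath-driver remain.)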
	I0904 05:55:25.724857 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:26.034730 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:26.136202 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:26.241019 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:26.534096 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:26.601404 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:26.725125 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:27.035323 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:27.104170 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:27.230540 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:27.534126 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:27.603639 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:27.727837 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:28.044366 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:28.116055 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:28.430708 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:28.534947 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:28.602134 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:28.725235 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:29.043541 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:29.104385 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:29.225961 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:29.535338 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:29.600886 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:29.723626 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:30.034091 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:30.101513 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:30.224956 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:30.534329 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:30.600373 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:30.725051 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:31.034066 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:31.101357 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:31.224909 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:31.535333 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:31.600735 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:31.725582 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:32.038562 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:32.105388 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:32.550106 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:32.565807 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:32.616856 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:32.729529 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:33.036034 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:33.102757 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:33.224788 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:33.534820 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:33.601225 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:33.724837 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:34.034719 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:34.100835 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:34.226311 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:34.535325 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:34.601342 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:34.731239 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:35.034754 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:35.103524 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:35.225767 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:35.534321 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:35.601401 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:35.725255 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:36.034621 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:36.100765 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:36.228178 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:36.533861 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:36.600528 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:36.725879 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:37.036197 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:37.102825 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:37.226127 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:37.535680 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:37.602667 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:37.725119 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:38.034126 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:38.102513 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:38.226173 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:38.536381 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:38.601955 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:38.725379 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:39.042912 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:39.103468 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:39.228924 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:39.534693 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:39.635026 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:39.735724 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:40.037353 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:40.104417 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:40.227213 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:40.275268 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 05:55:40.535030 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:40.602510 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:40.733188 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:41.037281 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:41.103872 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:41.225436 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:41.534224 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:41.598717 1120739 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.323405235s)
	W0904 05:55:41.598760 1120739 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 05:55:41.598783 1120739 retry.go:31] will retry after 22.207106013s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
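
This is the fifth consecutive failure of the same apply in this run, with recorded backoff delays of 4.56s, 7.91s, 5.17s, 20.45s, and 22.21s. The delays are not monotonically increasing, which is consistent with a randomized backoff, and because the manifest on disk is unchanged between attempts, each retry reproduces the identical validation error.
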
	I0904 05:55:41.600556 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:41.725174 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:42.034300 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:42.101053 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:42.224680 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:42.534283 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:42.603221 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:42.726991 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:43.035104 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:43.102662 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:43.232055 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:43.534863 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:43.601998 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:43.725421 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:44.034880 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:44.100930 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:44.224292 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:44.534942 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:44.634782 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:44.736017 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:45.035445 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:45.102015 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:45.224305 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:45.534711 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:45.603646 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:45.724480 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:46.034688 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:46.101264 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:46.226425 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:46.540911 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:46.642962 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:46.740669 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:47.034217 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:47.103229 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:47.223945 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:47.535025 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:47.606153 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:47.726218 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:48.034906 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:48.100884 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:48.224742 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:48.533741 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:48.601558 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:48.729937 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:49.038586 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:49.100934 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:49.227257 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:49.652931 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:49.652991 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:49.753675 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:50.033566 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:50.100968 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:50.225034 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:50.534455 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:50.600247 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:50.724604 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:51.034316 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:51.100378 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:51.224812 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:51.534069 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:51.601075 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:51.724887 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:52.033852 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:52.105607 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:52.224935 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:52.536276 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:52.603396 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:52.727955 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:53.036706 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:53.102693 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:53.225475 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:53.535616 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 05:55:53.635874 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:53.723990 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:54.034552 1120739 kapi.go:107] duration metric: took 1m9.504003401s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0904 05:55:54.035933 1120739 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-691233 cluster.
	I0904 05:55:54.036920 1120739 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0904 05:55:54.037823 1120739 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
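
For context, the `gcp-auth-skip-secret` key named in the advisory above is an ordinary pod label that the addon's mutating webhook checks before injecting the credential mount; the message suggests only the key's presence matters. A minimal client-go sketch of creating an opted-out pod (an illustration, not minikube code; the pod name and image are hypothetical and a default ~/.kube/config is assumed):

	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-auth", // hypothetical name
				// Label key taken from the log line above.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "app",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
				}},
			},
		}
		if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}
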
	I0904 05:55:54.100388 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:54.228689 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:54.601039 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:54.726373 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:55.101136 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:55.225475 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:55.695947 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:55.725516 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:56.101496 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:56.225131 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:56.603343 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:56.725026 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:57.105948 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:57.227411 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:57.603438 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:57.725382 1120739 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 05:55:58.101272 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:58.224462 1120739 kapi.go:107] duration metric: took 1m16.003917185s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
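
Each kapi.go:96 line above is one tick of a roughly 500ms poll: list pods matching the label selector, log the observed phase, and stop once a matching pod reports Ready, at which point the kapi.go:107 duration metric is printed. A minimal client-go approximation of that loop (a sketch of the behavior, not minikube's actual kapi.go; the kubeconfig path, namespace, and timeout are assumptions):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		sel := "kubernetes.io/minikube-addons=csi-hostpath-driver" // selector from the log
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, lerr := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
				if lerr != nil || len(pods.Items) == 0 {
					return false, nil // nothing to report yet; keep polling
				}
				pod := pods.Items[0]
				fmt.Printf("waiting for pod %q, current state: %s\n", sel, pod.Status.Phase)
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return true, nil // done: pod is Ready
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}
	}

Returning (false, nil) keeps the poll alive through transient list errors and Pending phases, which is why the log simply repeats the same line until the duration metric appears.
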
	I0904 05:55:58.600874 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:59.102533 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:55:59.602032 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:00.101919 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:00.601685 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:01.101456 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:01.600905 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:02.102158 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:02.600847 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:03.210303 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:03.601493 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:03.806696 1120739 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0904 05:56:04.101588 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0904 05:56:04.443703 1120739 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0904 05:56:04.443813 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:56:04.443833 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:56:04.444164 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:56:04.444192 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 05:56:04.444201 1120739 main.go:141] libmachine: Making call to close driver server
	I0904 05:56:04.444164 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:56:04.444210 1120739 main.go:141] libmachine: (addons-691233) Calling .Close
	I0904 05:56:04.444560 1120739 main.go:141] libmachine: (addons-691233) DBG | Closing plugin on server side
	I0904 05:56:04.444607 1120739 main.go:141] libmachine: Successfully made call to close driver server
	I0904 05:56:04.444621 1120739 main.go:141] libmachine: Making call to close connection to plugin binary
	W0904 05:56:04.444721 1120739 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
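
The stderr above is kubectl's client-side validation rule: every YAML document being applied must declare apiVersion and kind, and at least one document in ig-crd.yaml arrived without them, which fails the apply even though the other manifests went through unchanged. A minimal sketch of that pre-flight check (an illustration of the rule, not kubectl's implementation; the naive "---" splitting is a simplification of real multi-document parsing):

	package main

	import (
		"fmt"
		"os"
		"strings"

		"sigs.k8s.io/yaml"
	)

	func main() {
		data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
		if err != nil {
			panic(err)
		}
		for i, doc := range strings.Split(string(data), "\n---") {
			if strings.TrimSpace(doc) == "" {
				continue // skip empty documents between separators
			}
			var obj struct {
				APIVersion string `json:"apiVersion"`
				Kind       string `json:"kind"`
			}
			if err := yaml.Unmarshal([]byte(doc), &obj); err != nil {
				fmt.Printf("doc %d: unparsable: %v\n", i, err)
				continue
			}
			if obj.APIVersion == "" || obj.Kind == "" {
				// The condition behind "apiVersion not set, kind not set".
				fmt.Printf("doc %d: apiVersion or kind not set\n", i)
			}
		}
	}
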
	I0904 05:56:04.601861 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:05.102064 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:05.601526 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:06.102076 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:06.601164 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:07.101239 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:07.602192 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:08.100712 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:08.601192 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:09.100687 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:09.601894 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:10.101387 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:10.601523 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:11.101199 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:11.601478 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:12.100678 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:12.601556 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:13.101196 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:13.601578 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:14.101023 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:14.600898 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:15.101659 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:15.601693 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:16.101503 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:16.601360 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:17.100352 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:17.601443 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:18.101119 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:18.601079 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:19.102465 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:19.601493 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:20.101855 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:20.602288 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:21.101474 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:21.601640 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:22.101526 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:22.601089 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:23.101860 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:23.602030 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:24.100954 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:24.601128 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:25.101897 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:25.602258 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:26.101723 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:26.601572 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:27.101813 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:27.601675 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:28.101234 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:28.606000 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:29.100508 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:29.602469 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:30.101956 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:30.601960 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:31.102142 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:31.601183 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:32.100943 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:32.601664 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:33.100749 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:33.602185 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:34.101236 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:34.601031 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:35.101265 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:35.601392 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:36.101975 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:36.601620 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:37.101697 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:37.601103 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:38.101284 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:38.601788 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:39.100350 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:39.601554 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:40.101084 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:40.601403 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:41.102387 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:41.602509 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:42.101308 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:42.602511 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:43.102714 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:43.601212 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:44.101962 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:44.601136 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:45.102172 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:45.601922 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:46.101080 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:46.601508 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:47.101104 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:47.601893 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:48.100562 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:48.601331 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:49.101937 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:49.601095 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:50.102243 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:50.600946 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:51.102454 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:51.602205 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:52.101129 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:52.600760 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:53.100840 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:53.601129 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:54.101186 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:54.601481 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:55.102370 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:55.601537 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:56.105078 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:56.604094 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:57.104355 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:57.604250 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:58.102062 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:58.600792 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:59.101051 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:56:59.603429 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:57:00.103513 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:57:00.602315 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:57:01.102051 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:57:01.659995 1120739 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 05:57:02.101715 1120739 kapi.go:107] duration metric: took 2m21.004427051s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0904 05:57:02.103346 1120739 out.go:179] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, storage-provisioner, registry-creds, metrics-server, ingress-dns, cloud-spanner, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0904 05:57:02.104349 1120739 addons.go:514] duration metric: took 2m29.482255203s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin storage-provisioner registry-creds metrics-server ingress-dns cloud-spanner yakd default-storageclass volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I0904 05:57:02.104398 1120739 start.go:246] waiting for cluster config update ...
	I0904 05:57:02.104426 1120739 start.go:255] writing updated cluster config ...
	I0904 05:57:02.104785 1120739 ssh_runner.go:195] Run: rm -f paused
	I0904 05:57:02.111832 1120739 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 05:57:02.116052 1120739 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ccfsq" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 05:57:02.121557 1120739 pod_ready.go:94] pod "coredns-66bc5c9577-ccfsq" is "Ready"
	I0904 05:57:02.121578 1120739 pod_ready.go:86] duration metric: took 5.500829ms for pod "coredns-66bc5c9577-ccfsq" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 05:57:02.124127 1120739 pod_ready.go:83] waiting for pod "etcd-addons-691233" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 05:57:02.130590 1120739 pod_ready.go:94] pod "etcd-addons-691233" is "Ready"
	I0904 05:57:02.130611 1120739 pod_ready.go:86] duration metric: took 6.46184ms for pod "etcd-addons-691233" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 05:57:02.132974 1120739 pod_ready.go:83] waiting for pod "kube-apiserver-addons-691233" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 05:57:02.142352 1120739 pod_ready.go:94] pod "kube-apiserver-addons-691233" is "Ready"
	I0904 05:57:02.142380 1120739 pod_ready.go:86] duration metric: took 9.38634ms for pod "kube-apiserver-addons-691233" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 05:57:02.146400 1120739 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-691233" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 05:57:02.516906 1120739 pod_ready.go:94] pod "kube-controller-manager-addons-691233" is "Ready"
	I0904 05:57:02.516941 1120739 pod_ready.go:86] duration metric: took 370.519107ms for pod "kube-controller-manager-addons-691233" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 05:57:02.716743 1120739 pod_ready.go:83] waiting for pod "kube-proxy-5qvc9" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 05:57:03.116582 1120739 pod_ready.go:94] pod "kube-proxy-5qvc9" is "Ready"
	I0904 05:57:03.116618 1120739 pod_ready.go:86] duration metric: took 399.843928ms for pod "kube-proxy-5qvc9" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 05:57:03.315988 1120739 pod_ready.go:83] waiting for pod "kube-scheduler-addons-691233" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 05:57:03.716377 1120739 pod_ready.go:94] pod "kube-scheduler-addons-691233" is "Ready"
	I0904 05:57:03.716411 1120739 pod_ready.go:86] duration metric: took 400.390448ms for pod "kube-scheduler-addons-691233" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 05:57:03.716428 1120739 pod_ready.go:40] duration metric: took 1.604555601s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 05:57:03.758344 1120739 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0904 05:57:03.760519 1120739 out.go:179] * Done! kubectl is now configured to use "addons-691233" cluster and "default" namespace by default
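
The skew line two entries above is within kubectl's supported window: the client may be at most one minor version away from the API server, so kubectl 1.33.2 against a 1.34.0 cluster only earns an informational note rather than a warning. A minimal sketch of reproducing that check with the discovery client (the client minor is hardcoded from the log line; a real tool would read it from the kubectl binary):

	package main

	import (
		"fmt"
		"strconv"
		"strings"

		"k8s.io/client-go/discovery"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		dc, err := discovery.NewDiscoveryClientForConfig(cfg)
		if err != nil {
			panic(err)
		}
		sv, err := dc.ServerVersion()
		if err != nil {
			panic(err)
		}
		clientMinor := 33 // from "kubectl: 1.33.2" in the log above
		// Some servers report minors like "34+"; strip the suffix first.
		serverMinor, _ := strconv.Atoi(strings.TrimSuffix(sv.Minor, "+"))
		skew := serverMinor - clientMinor
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("kubectl: 1.%d, cluster: %s.%s (minor skew: %d)\n", clientMinor, sv.Major, sv.Minor, skew)
	}
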
	
	
	==> CRI-O <==
	Sep 04 06:00:21 addons-691233 crio[828]: time="2025-09-04 06:00:21.497106188Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ebc1c8b-dd51-4990-b0fa-8eaf4e89e87a name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 06:00:21 addons-691233 crio[828]: time="2025-09-04 06:00:21.497432167Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:adc4212fdac78243834cc19547db87d13b023dd704a0572a8c8b7212db6fb3a3,PodSandboxId:fa06bfb2ace6b470a8c9e533985f175f0faab6850f698ef9d6edd0533549fc98,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1756965478916620405,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aaaa00f4-fe42-4882-a823-4b1add3972ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a56b30a64f9d4a9e32580b3e38c845066ef37b2e0f65f76e36e67964c8de454,PodSandboxId:6e9f4352b5862571b3a5285d13f8958aa6044bac580c092fd33c620bcb305c36,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1756965428115042977,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bb14d69-8ad2-4f5c-b13c-a5c8433f0de8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9a48dd65cd5f716bf05e9bffd70533de9768181a6deccf334e7b5e80846baef,PodSandboxId:6c0834cd16a5ff60af69fd05f8a023185bf693c2cf2f66e485ad3c810d397cfe,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1756965421800380631,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-r65s8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6937bf49-983a-4f4d-859d-1afc993decae,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1ffb942401cd4d84303a2b990570194ba3333a20c55b22f2398255811745519a,PodSandboxId:2a4f109457af41d82faed0e669041d9be3f28bbbd5867ade8749dc1739cd380a,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1756965346545317843,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-c5clw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52b6a1d6-4ceb-493f-bacb-9666ba658d8e,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17897e00368e18464adf9cbd74bab265c1d916cf086d938c6305852a95f854ad,PodSandboxId:1310202888fef7da7e8fad880b7bec3b57643d4c9df7172916107e8b7b5f2732,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1756965346298281802,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-54h7w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a1f353e3-0d41-49d4-b862-a5ad25d05ba3,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d9b7e4e664fc93fa90466045ce5d10f352051064a6ec81dae25766fd0efbdfb,PodSandboxId:8ba12a8cd3238b7a8c45aadbbf7ccb35bb33db3a1cd0511af59f9bd2c572af7f,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1756965337763459483,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-rd4xd,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d2466978-8b68-44c5-824e-3a840edb5661,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c834323ef8b5f277f50446faa4f40668ff4c7bd71142c7c3c85e36649c3ccb,PodSandboxId:1600064be9ea79f39b6766498f564e6ecaf3fa5644209a0edf43af608c9856ca,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c0845d92124611df4baa18fd96892b107dd9c28f6178793e020b10264622c27b,State:CONTAINER_RUNNING,CreatedAt:1756965335078871027,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-8gqn6,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 6e19b2fd-f636-4888-8f45-b23455e6f029,},Annotations:map[string]string{io.kubernetes.container.hash: 8e2c4a14,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be5cf8148307047013dd816344037ecfb2d82a235b0733135969fbe9a5b135ad,PodSandboxId:9b6579c9aca277e06bc1c53b21fa27fae7623c07c5946734e2122b2bd4c74791,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase
/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1756965315275470591,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52679a53-8aa8-4c2e-acc1-56dd0dbf97a1,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:576fed814abe43fe74dbb2cf4682c1215807f4090d88510707c8bfcae8305fe6,PodSandboxId:80cb7f02a6ec870ab9d6957f9af519deb7ed
cb815c779bc076f14aa9e84e8e5b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1756965283414302047,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7rq2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee8daf49-7dbe-4e6b-bb4b-d60d8514c314,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:707d5ea26b7b3750aaecee78b81b19029671b73798a5937b61b8c75e49ef2eac,PodSandbo
xId:41ef6fcd6fc99a5f4c473ebc6eb0ba0783914a0e470e6f5a67e47a9ff2c035ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1756965281921955926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 488a2d80-840e-4719-b187-4be1f50a7f90,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b771c0e371688f01ad3cdaa5f892cc41bba767d0ac8c352ffdc3861240b1b28e,PodSandboxId:30c45a10
9b84849f2b75dad842e6da7219068fa73abbfba07bb2135ebfa6c497,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1756965274223512705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ccfsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181c1ea0-7d38-4b4b-b7c5-6a289684b4fa,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\
":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c7a6229e6c279522216354402eceaee4700055e167f67f3706e3ac24985966,PodSandboxId:d9f4eafbdbea7cc255c75bf0f4123c2cfe1215cb3737dc0e41ef52e10189ea2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1756965273437642797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5qvc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8daf4d1-c29b-4ec9-b102-d5fe8cf3b74f,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d212f3f321ab4b7c516af624171bfab54f83237edba1f1da5294d3a188605ee,PodSandboxId:9baad1f510aee58a50c5376e5112306e088242c8200954b11a0ed5a073bdb53f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1756965261890834723,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-691233,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 944a43d990ea2dc989b752e599b348b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7ea
a1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41e4672b737bcdac8dba04d3ec0b20625f74afe4ca528b1ee6cd2433b72297ce,PodSandboxId:da8274c7848487ccc46d1a3491c2c3b0ce23fdfd3fcd3d48d07df782fb466aa2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1756965261878864257,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-691233,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 83b67e3b4f9c2ff93fc108c7dd438522,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47316715cfbe5a105201ff478a0f0083b1084ba770797d11bb46ab5f6bcd516f,PodSandboxId:d4fe5cfbc82b19e3c6fd4145d211ce3bb9898590aa5ab02fa3d121ae7e15a8ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1756965261871341236,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-addons-691233,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e594df2ef107b25ceede90b6bc8e26a,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23bf46f423a87b1f2aed90ce4853d8f5eb85470dacc08ab60120fbadfc147af9,PodSandboxId:b41343b4dfd8bdaa1dc25eda460827dfd3a77de1a22ff7bb052268a688a5a475,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1756965261856953385,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-691233,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2f275edeaaa1ae886f00a20f3870e73,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ebc1c8b-dd51-4990-b0fa-8eaf4e89e87a name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 06:00:21 addons-691233 crio[828]: time="2025-09-04 06:00:21.534828436Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=554913fb-adb3-4631-b3b3-5e42048179d5 name=/runtime.v1.RuntimeService/Version
	Sep 04 06:00:21 addons-691233 crio[828]: time="2025-09-04 06:00:21.535172411Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=554913fb-adb3-4631-b3b3-5e42048179d5 name=/runtime.v1.RuntimeService/Version
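
The Version request/response pair above is the CRI gRPC call that kubelet (and tools like crictl) make against CRI-O's unix socket, and it doubles as the quickest liveness probe for the runtime. A minimal sketch of issuing it directly (assumes the default /var/run/crio/crio.sock path and the k8s.io/cri-api v1 bindings):

	package main

	import (
		"context"
		"fmt"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := rt.Version(context.Background(), &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		// Prints the same fields seen in the VersionResponse entry above.
		fmt.Printf("%s %s (CRI %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
	}
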
	Sep 04 06:00:21 addons-691233 crio[828]: time="2025-09-04 06:00:21.536450537Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=51ef8eeb-f10f-4e3a-8c2b-03e89395c421 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 06:00:21 addons-691233 crio[828]: time="2025-09-04 06:00:21.537639327Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756965621537613965,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596878,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51ef8eeb-f10f-4e3a-8c2b-03e89395c421 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 06:00:21 addons-691233 crio[828]: time="2025-09-04 06:00:21.538422945Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=65699c45-3c47-4ec8-b4f5-47e323b87ff5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 06:00:21 addons-691233 crio[828]: time="2025-09-04 06:00:21.538495334Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=65699c45-3c47-4ec8-b4f5-47e323b87ff5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 06:00:21 addons-691233 crio[828]: time="2025-09-04 06:00:21.538826658Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:adc4212fdac78243834cc19547db87d13b023dd704a0572a8c8b7212db6fb3a3,PodSandboxId:fa06bfb2ace6b470a8c9e533985f175f0faab6850f698ef9d6edd0533549fc98,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1756965478916620405,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aaaa00f4-fe42-4882-a823-4b1add3972ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a56b30a64f9d4a9e32580b3e38c845066ef37b2e0f65f76e36e67964c8de454,PodSandboxId:6e9f4352b5862571b3a5285d13f8958aa6044bac580c092fd33c620bcb305c36,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1756965428115042977,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1bb14d69-8ad2-4f5c-b13c-a5c8433f0de8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9a48dd65cd5f716bf05e9bffd70533de9768181a6deccf334e7b5e80846baef,PodSandboxId:6c0834cd16a5ff60af69fd05f8a023185bf693c2cf2f66e485ad3c810d397cfe,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1756965421800380631,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-r65s8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6937bf49-983a-4f4d-859d-1afc993decae,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1ffb942401cd4d84303a2b990570194ba3333a20c55b22f2398255811745519a,PodSandboxId:2a4f109457af41d82faed0e669041d9be3f28bbbd5867ade8749dc1739cd380a,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1756965346545317843,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-c5clw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52b6a1d6-4ceb-493f-bacb-9666ba658d8e,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17897e00368e18464adf9cbd74bab265c1d916cf086d938c6305852a95f854ad,PodSandboxId:1310202888fef7da7e8fad880b7bec3b57643d4c9df7172916107e8b7b5f2732,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1756965346298281802,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-54h7w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a1f353e3-0d41-49d4-b862-a5ad25d05ba3,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d9b7e4e664fc93fa90466045ce5d10f352051064a6ec81dae25766fd0efbdfb,PodSandboxId:8ba12a8cd3238b7a8c45aadbbf7ccb35bb33db3a1cd0511af59f9bd2c572af7f,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1756965337763459483,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-rd4xd,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d2466978-8b68-44c5-824e-3a840edb5661,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9c834323ef8b5f277f50446faa4f40668ff4c7bd71142c7c3c85e36649c3ccb,PodSandboxId:1600064be9ea79f39b6766498f564e6ecaf3fa5644209a0edf43af608c9856ca,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c0845d92124611df4baa18fd96892b107dd9c28f6178793e020b10264622c27b,State:CONTAINER_RUNNING,CreatedAt:1756965335078871027,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-8gqn6,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 6e19b2fd-f636-4888-8f45-b23455e6f029,},Annotations:map[string]string{io.kubernetes.container.hash: 8e2c4a14,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be5cf8148307047013dd816344037ecfb2d82a235b0733135969fbe9a5b135ad,PodSandboxId:9b6579c9aca277e06bc1c53b21fa27fae7623c07c5946734e2122b2bd4c74791,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase
/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1756965315275470591,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52679a53-8aa8-4c2e-acc1-56dd0dbf97a1,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:576fed814abe43fe74dbb2cf4682c1215807f4090d88510707c8bfcae8305fe6,PodSandboxId:80cb7f02a6ec870ab9d6957f9af519deb7ed
cb815c779bc076f14aa9e84e8e5b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1756965283414302047,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7rq2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee8daf49-7dbe-4e6b-bb4b-d60d8514c314,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:707d5ea26b7b3750aaecee78b81b19029671b73798a5937b61b8c75e49ef2eac,PodSandbo
xId:41ef6fcd6fc99a5f4c473ebc6eb0ba0783914a0e470e6f5a67e47a9ff2c035ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1756965281921955926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 488a2d80-840e-4719-b187-4be1f50a7f90,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b771c0e371688f01ad3cdaa5f892cc41bba767d0ac8c352ffdc3861240b1b28e,PodSandboxId:30c45a10
9b84849f2b75dad842e6da7219068fa73abbfba07bb2135ebfa6c497,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1756965274223512705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ccfsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181c1ea0-7d38-4b4b-b7c5-6a289684b4fa,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\
":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c7a6229e6c279522216354402eceaee4700055e167f67f3706e3ac24985966,PodSandboxId:d9f4eafbdbea7cc255c75bf0f4123c2cfe1215cb3737dc0e41ef52e10189ea2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1756965273437642797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5qvc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8daf4d1-c29b-4ec9-b102-d5fe8cf3b74f,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d212f3f321ab4b7c516af624171bfab54f83237edba1f1da5294d3a188605ee,PodSandboxId:9baad1f510aee58a50c5376e5112306e088242c8200954b11a0ed5a073bdb53f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1756965261890834723,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-691233,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 944a43d990ea2dc989b752e599b348b0,},Annotations:map[string]string{io.kubernetes.container.hash: 7ea
a1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41e4672b737bcdac8dba04d3ec0b20625f74afe4ca528b1ee6cd2433b72297ce,PodSandboxId:da8274c7848487ccc46d1a3491c2c3b0ce23fdfd3fcd3d48d07df782fb466aa2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1756965261878864257,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-691233,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 83b67e3b4f9c2ff93fc108c7dd438522,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47316715cfbe5a105201ff478a0f0083b1084ba770797d11bb46ab5f6bcd516f,PodSandboxId:d4fe5cfbc82b19e3c6fd4145d211ce3bb9898590aa5ab02fa3d121ae7e15a8ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1756965261871341236,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-addons-691233,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e594df2ef107b25ceede90b6bc8e26a,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23bf46f423a87b1f2aed90ce4853d8f5eb85470dacc08ab60120fbadfc147af9,PodSandboxId:b41343b4dfd8bdaa1dc25eda460827dfd3a77de1a22ff7bb052268a688a5a475,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1756965261856953385,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-691233,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2f275edeaaa1ae886f00a20f3870e73,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=65699c45-3c47-4ec8-b4f5-47e323b87ff5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 06:00:21 addons-691233 crio[828]: time="2025-09-04 06:00:21.572222585Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cebeeca2-f4b9-4f91-80b8-d3a699d4900a name=/runtime.v1.RuntimeService/Version
	Sep 04 06:00:21 addons-691233 crio[828]: time="2025-09-04 06:00:21.572385761Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cebeeca2-f4b9-4f91-80b8-d3a699d4900a name=/runtime.v1.RuntimeService/Version
	Sep 04 06:00:21 addons-691233 crio[828]: time="2025-09-04 06:00:21.573294834Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ddabab14-77a1-494f-92a9-cdc644820172 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 06:00:21 addons-691233 crio[828]: time="2025-09-04 06:00:21.573702034Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Sep 04 06:00:21 addons-691233 crio[828]: time="2025-09-04 06:00:21.573946539Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
	Sep 04 06:00:21 addons-691233 crio[828]: time="2025-09-04 06:00:21.574768899Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756965621574741596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596878,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ddabab14-77a1-494f-92a9-cdc644820172 name=/runtime.v1.ImageService/ImageFsInfo
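	
	The RuntimeService/Version, ImageService/ImageFsInfo, and RuntimeService/ListContainers entries above are ordinary CRI gRPC calls against the cri-o socket; the kubelet and crictl issue the same sequence when polling container state. Below is a minimal sketch (not part of the test suite) of replaying those calls in Go, assuming the stock cri-o socket path /var/run/crio/crio.sock and the k8s.io/cri-api v1 bindings:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	criv1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// cri-o serves the CRI over a local unix socket; no TLS is involved.
	// The path is the cri-o default and may differ on other setups.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := criv1.NewRuntimeServiceClient(conn)

	// Counterpart of the &VersionRequest{} / &VersionResponse{...} pairs above.
	ver, err := rt.Version(ctx, &criv1.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// An empty filter takes the "No filters were applied" path on the server
	// and returns the full container list, i.e. the large
	// ListContainersResponse dumps logged above.
	list, err := rt.ListContainers(ctx, &criv1.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range list.Containers {
		fmt.Printf("%.13s  %-25s %s\n", c.Id, c.Metadata.Name, c.State)
	}

	// The ImageFsInfo entries come from the image service on the same socket.
	img := criv1.NewImageServiceClient(conn)
	if fs, err := img.ImageFsInfo(ctx, &criv1.ImageFsInfoRequest{}); err == nil {
		for _, f := range fs.ImageFilesystems {
			fmt.Printf("%s: %d bytes used\n", f.FsId.Mountpoint, f.UsedBytes.Value)
		}
	}
}

	The container status table that follows is the human-readable rendering of the same ListContainers data.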
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	adc4212fdac78       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago       Running             nginx                     0                   fa06bfb2ace6b       nginx
	1a56b30a64f9d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   6e9f4352b5862       busybox
	e9a48dd65cd5f       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago       Running             controller                0                   6c0834cd16a5f       ingress-nginx-controller-9cc49f96f-r65s8
	1ffb942401cd4       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                             4 minutes ago       Exited              patch                     1                   2a4f109457af4       ingress-nginx-admission-patch-c5clw
	17897e00368e1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago       Exited              create                    0                   1310202888fef       ingress-nginx-admission-create-54h7w
	1d9b7e4e664fc       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   8ba12a8cd3238       local-path-provisioner-648f6765c9-rd4xd
	e9c834323ef8b       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506            4 minutes ago       Running             gadget                    0                   1600064be9ea7       gadget-8gqn6
	be5cf81483070       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               5 minutes ago       Running             minikube-ingress-dns      0                   9b6579c9aca27       kube-ingress-dns-minikube
	576fed814abe4       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   80cb7f02a6ec8       amd-gpu-device-plugin-7rq2w
	707d5ea26b7b3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   41ef6fcd6fc99       storage-provisioner
	b771c0e371688       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago       Running             coredns                   0                   30c45a109b848       coredns-66bc5c9577-ccfsq
	d3c7a6229e6c2       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                             5 minutes ago       Running             kube-proxy                0                   d9f4eafbdbea7       kube-proxy-5qvc9
	6d212f3f321ab       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                             5 minutes ago       Running             kube-controller-manager   0                   9baad1f510aee       kube-controller-manager-addons-691233
	41e4672b737bc       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                             5 minutes ago       Running             kube-apiserver            0                   da8274c784848       kube-apiserver-addons-691233
	47316715cfbe5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   d4fe5cfbc82b1       etcd-addons-691233
	23bf46f423a87       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                             5 minutes ago       Running             kube-scheduler            0                   b41343b4dfd8b       kube-scheduler-addons-691233
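The container table above reflects CRI-O state inside the minikube VM at capture time; the two Exited admission-job containers are expected, since they are run-once webhook certificate jobs. A roughly equivalent view can be pulled by hand (a sketch, assuming the default crictl setup inside the VM):

	# List all CRI-O containers, including exited ones
	$ minikube -p addons-691233 ssh -- sudo crictl ps -a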
	
	
	==> coredns [b771c0e371688f01ad3cdaa5f892cc41bba767d0ac8c352ffdc3861240b1b28e] <==
	[INFO] 10.244.0.8:56511 - 51672 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.001151974s
	[INFO] 10.244.0.8:56511 - 60351 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000257215s
	[INFO] 10.244.0.8:56511 - 58094 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000273395s
	[INFO] 10.244.0.8:56511 - 47934 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000076951s
	[INFO] 10.244.0.8:56511 - 10516 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000169244s
	[INFO] 10.244.0.8:56511 - 61784 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00013784s
	[INFO] 10.244.0.8:56511 - 2228 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000202679s
	[INFO] 10.244.0.8:55250 - 886 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000135981s
	[INFO] 10.244.0.8:55250 - 614 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000152243s
	[INFO] 10.244.0.8:44474 - 50628 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128459s
	[INFO] 10.244.0.8:44474 - 50886 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000319863s
	[INFO] 10.244.0.8:36628 - 52374 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00007076s
	[INFO] 10.244.0.8:36628 - 52116 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0002192s
	[INFO] 10.244.0.8:42113 - 21087 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000085611s
	[INFO] 10.244.0.8:42113 - 20889 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000316008s
	[INFO] 10.244.0.22:47055 - 30882 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000637089s
	[INFO] 10.244.0.22:60367 - 3185 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000157425s
	[INFO] 10.244.0.22:36788 - 20817 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000128738s
	[INFO] 10.244.0.22:41206 - 10574 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000512077s
	[INFO] 10.244.0.22:60633 - 29714 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126041s
	[INFO] 10.244.0.22:49129 - 31893 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000100502s
	[INFO] 10.244.0.22:57544 - 49271 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.003236769s
	[INFO] 10.244.0.22:39269 - 55724 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003572514s
	[INFO] 10.244.0.26:59999 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00102423s
	[INFO] 10.244.0.26:37223 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000108788s
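The NXDOMAIN-then-NOERROR runs above are ordinary resolv.conf search-path expansion (ndots:5): each lookup walks the pod's search suffixes before the fully qualified name answers, so they are not resolution failures. The behavior can be reproduced from a throwaway pod (a sketch; the dnsprobe pod name and busybox tag are illustrative):

	# Each search suffix is tried in turn, mirroring the NXDOMAIN chain in the log
	$ kubectl --context addons-691233 run dnsprobe --rm -it --restart=Never \
	    --image=busybox:1.36 -- nslookup registry.kube-system.svc.cluster.local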
	
	
	==> describe nodes <==
	Name:               addons-691233
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-691233
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3fa37de45a2901b215fab008201edf72ce5a1ff
	                    minikube.k8s.io/name=addons-691233
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T05_54_27_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-691233
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 05:54:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-691233
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 06:00:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 05:58:02 +0000   Thu, 04 Sep 2025 05:54:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 05:58:02 +0000   Thu, 04 Sep 2025 05:54:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 05:58:02 +0000   Thu, 04 Sep 2025 05:54:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 05:58:02 +0000   Thu, 04 Sep 2025 05:54:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.193
	  Hostname:    addons-691233
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 08f439593f5345dc8e9c806efc36160d
	  System UUID:                08f43959-3f53-45dc-8e9c-806efc36160d
	  Boot ID:                    8b495dba-d433-493c-a83d-b1b84d7fe3db
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	  default                     hello-world-app-5d498dc89-ksgwm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gadget                      gadget-8gqn6                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-r65s8    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m41s
	  kube-system                 amd-gpu-device-plugin-7rq2w                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 coredns-66bc5c9577-ccfsq                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m49s
	  kube-system                 etcd-addons-691233                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m54s
	  kube-system                 kube-apiserver-addons-691233                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 kube-controller-manager-addons-691233       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-proxy-5qvc9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-scheduler-addons-691233                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  local-path-storage          local-path-provisioner-648f6765c9-rd4xd     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m47s  kube-proxy       
	  Normal  Starting                 5m54s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m54s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m54s  kubelet          Node addons-691233 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m54s  kubelet          Node addons-691233 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m54s  kubelet          Node addons-691233 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m53s  kubelet          Node addons-691233 status is now: NodeReady
	  Normal  RegisteredNode           5m50s  node-controller  Node addons-691233 event: Registered Node addons-691233 in Controller
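This node dump matches `kubectl describe node` output: no taints, no pressure conditions, and 850m of the 2-CPU VM requested. It can be regenerated after the run for comparison (a sketch):

	$ kubectl --context addons-691233 describe node addons-691233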
	
	
	==> dmesg <==
	[  +0.002444] kauditd_printk_skb: 311 callbacks suppressed
	[ +13.932787] kauditd_printk_skb: 368 callbacks suppressed
	[Sep 4 05:55] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.275442] kauditd_printk_skb: 11 callbacks suppressed
	[  +3.624575] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.981565] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.364204] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.510357] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.428700] kauditd_printk_skb: 35 callbacks suppressed
	[  +0.000049] kauditd_printk_skb: 150 callbacks suppressed
	[  +4.559952] kauditd_printk_skb: 103 callbacks suppressed
	[Sep 4 05:56] kauditd_printk_skb: 56 callbacks suppressed
	[Sep 4 05:57] kauditd_printk_skb: 29 callbacks suppressed
	[  +3.448288] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.912836] kauditd_printk_skb: 11 callbacks suppressed
	[  +1.163060] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.085547] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.000505] kauditd_printk_skb: 37 callbacks suppressed
	[  +3.733862] kauditd_printk_skb: 99 callbacks suppressed
	[  +3.760001] kauditd_printk_skb: 152 callbacks suppressed
	[  +2.816278] kauditd_printk_skb: 134 callbacks suppressed
	[  +0.000026] kauditd_printk_skb: 40 callbacks suppressed
	[Sep 4 05:58] kauditd_printk_skb: 102 callbacks suppressed
	[  +5.777607] kauditd_printk_skb: 41 callbacks suppressed
	[Sep 4 06:00] kauditd_printk_skb: 127 callbacks suppressed
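The kauditd lines record kernel audit messages being rate-limited during bursts of container activity; they are noise rather than an error. To read the ring buffer directly (a sketch; the BusyBox dmesg in this guest supports only basic options):

	$ minikube -p addons-691233 ssh -- dmesg | tail -n 30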
	
	
	==> etcd [47316715cfbe5a105201ff478a0f0083b1084ba770797d11bb46ab5f6bcd516f] <==
	{"level":"warn","ts":"2025-09-04T05:57:34.606352Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-04T05:57:34.235371Z","time spent":"370.895351ms","remote":"127.0.0.1:42958","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1318,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/default/registry-test\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/default/registry-test\" value_size:1274 >> failure:<>"}
	{"level":"warn","ts":"2025-09-04T05:57:34.606374Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-04T05:57:34.296558Z","time spent":"309.811779ms","remote":"127.0.0.1:42924","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":1,"response size":846,"request content":"key:\"/registry/persistentvolumeclaims/default/hpvc\" limit:1 "}
	{"level":"warn","ts":"2025-09-04T05:57:34.606243Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"363.973557ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-04T05:57:34.606591Z","caller":"traceutil/trace.go:172","msg":"trace[1045428900] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1447; }","duration":"364.335719ms","start":"2025-09-04T05:57:34.242249Z","end":"2025-09-04T05:57:34.606585Z","steps":["trace[1045428900] 'agreement among raft nodes before linearized reading'  (duration: 363.947309ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T05:57:34.606614Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-04T05:57:34.242229Z","time spent":"364.375521ms","remote":"127.0.0.1:42640","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2025-09-04T05:57:34.606659Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"237.160509ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-04T05:57:34.606679Z","caller":"traceutil/trace.go:172","msg":"trace[717421224] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1447; }","duration":"237.182291ms","start":"2025-09-04T05:57:34.369492Z","end":"2025-09-04T05:57:34.606674Z","steps":["trace[717421224] 'agreement among raft nodes before linearized reading'  (duration: 237.150684ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T05:57:51.300141Z","caller":"traceutil/trace.go:172","msg":"trace[1069825662] linearizableReadLoop","detail":"{readStateIndex:1698; appliedIndex:1698; }","duration":"371.801913ms","start":"2025-09-04T05:57:50.928323Z","end":"2025-09-04T05:57:51.300125Z","steps":["trace[1069825662] 'read index received'  (duration: 371.796517ms)","trace[1069825662] 'applied index is now lower than readState.Index'  (duration: 4.511µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-04T05:57:51.300261Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"371.923825ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-04T05:57:51.300278Z","caller":"traceutil/trace.go:172","msg":"trace[1891622887] range","detail":"{range_begin:/registry/horizontalpodautoscalers; range_end:; response_count:0; response_revision:1627; }","duration":"371.955968ms","start":"2025-09-04T05:57:50.928317Z","end":"2025-09-04T05:57:51.300273Z","steps":["trace[1891622887] 'agreement among raft nodes before linearized reading'  (duration: 371.898419ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T05:57:51.300298Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-04T05:57:50.928298Z","time spent":"371.995432ms","remote":"127.0.0.1:43014","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":0,"response size":29,"request content":"key:\"/registry/horizontalpodautoscalers\" limit:1 "}
	{"level":"warn","ts":"2025-09-04T05:57:51.302006Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"353.52531ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-04T05:57:51.302037Z","caller":"traceutil/trace.go:172","msg":"trace[81868966] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1628; }","duration":"353.562115ms","start":"2025-09-04T05:57:50.948467Z","end":"2025-09-04T05:57:51.302029Z","steps":["trace[81868966] 'agreement among raft nodes before linearized reading'  (duration: 353.506011ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T05:57:51.302054Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-04T05:57:50.948452Z","time spent":"353.597607ms","remote":"127.0.0.1:42958","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-09-04T05:57:51.302337Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"291.761207ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-04T05:57:51.302357Z","caller":"traceutil/trace.go:172","msg":"trace[134681894] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1628; }","duration":"291.781329ms","start":"2025-09-04T05:57:51.010569Z","end":"2025-09-04T05:57:51.302350Z","steps":["trace[134681894] 'agreement among raft nodes before linearized reading'  (duration: 291.750588ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T05:57:51.302615Z","caller":"traceutil/trace.go:172","msg":"trace[806285710] transaction","detail":"{read_only:false; response_revision:1628; number_of_response:1; }","duration":"378.973761ms","start":"2025-09-04T05:57:50.923633Z","end":"2025-09-04T05:57:51.302607Z","steps":["trace[806285710] 'process raft request'  (duration: 376.854759ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T05:57:51.302677Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-04T05:57:50.923615Z","time spent":"379.014702ms","remote":"127.0.0.1:43106","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1573 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2025-09-04T05:57:57.436783Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.669772ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-04T05:57:57.440056Z","caller":"traceutil/trace.go:172","msg":"trace[2088262784] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1658; }","duration":"142.947912ms","start":"2025-09-04T05:57:57.297089Z","end":"2025-09-04T05:57:57.440037Z","steps":["trace[2088262784] 'agreement among raft nodes before linearized reading'  (duration: 39.965154ms)","trace[2088262784] 'range keys from in-memory index tree'  (duration: 99.683985ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-04T05:57:57.437490Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.126381ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10517762287900815203 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/registry-creds-764b6fb674-578m2\" mod_revision:1658 > success:<request_delete_range:<key:\"/registry/pods/kube-system/registry-creds-764b6fb674-578m2\" > > failure:<request_range:<key:\"/registry/pods/kube-system/registry-creds-764b6fb674-578m2\" > >>","response":"size:18"}
	{"level":"info","ts":"2025-09-04T05:57:57.440310Z","caller":"traceutil/trace.go:172","msg":"trace[965890850] linearizableReadLoop","detail":"{readStateIndex:1732; appliedIndex:1731; }","duration":"103.299936ms","start":"2025-09-04T05:57:57.337002Z","end":"2025-09-04T05:57:57.440302Z","steps":["trace[965890850] 'read index received'  (duration: 40.604µs)","trace[965890850] 'applied index is now lower than readState.Index'  (duration: 103.258632ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-04T05:57:57.440454Z","caller":"traceutil/trace.go:172","msg":"trace[1541736199] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1659; }","duration":"155.234158ms","start":"2025-09-04T05:57:57.285212Z","end":"2025-09-04T05:57:57.440447Z","steps":["trace[1541736199] 'process raft request'  (duration: 51.894979ms)","trace[1541736199] 'compare'  (duration: 100.0529ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-04T05:57:57.440642Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"141.232707ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/pvc-protection-controller\" limit:1 ","response":"range_response_count:1 size:228"}
	{"level":"info","ts":"2025-09-04T05:57:57.440664Z","caller":"traceutil/trace.go:172","msg":"trace[755726564] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/pvc-protection-controller; range_end:; response_count:1; response_revision:1659; }","duration":"141.26374ms","start":"2025-09-04T05:57:57.299394Z","end":"2025-09-04T05:57:57.440658Z","steps":["trace[755726564] 'agreement among raft nodes before linearized reading'  (duration: 141.158335ms)"],"step_count":1}
	
	
	==> kernel <==
	 06:00:21 up 6 min,  0 users,  load average: 0.32, 0.99, 0.58
	Linux addons-691233 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Sep  3 00:15:45 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [41e4672b737bcdac8dba04d3ec0b20625f74afe4ca528b1ee6cd2433b72297ce] <==
	E0904 05:57:14.529289       1 conn.go:339] Error on socket receive: read tcp 192.168.39.193:8443->192.168.39.1:54160: use of closed network connection
	E0904 05:57:14.702219       1 conn.go:339] Error on socket receive: read tcp 192.168.39.193:8443->192.168.39.1:54190: use of closed network connection
	I0904 05:57:23.898484       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.159.62"}
	I0904 05:57:33.625826       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0904 05:57:47.679166       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0904 05:57:47.938286       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.0.252"}
	I0904 05:57:54.804821       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 05:57:58.480149       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0904 05:58:16.132457       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 05:58:16.132497       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 05:58:16.163567       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 05:58:16.163606       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 05:58:16.167763       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 05:58:16.167840       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 05:58:16.198753       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 05:58:16.198967       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 05:58:16.233080       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 05:58:16.233129       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0904 05:58:17.168213       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0904 05:58:17.234606       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0904 05:58:17.248039       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0904 05:58:23.868411       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 05:59:19.671964       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 05:59:30.379130       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 06:00:20.379272       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.102.253"}
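The watcher terminations at 05:58:17 coincide with the snapshot.storage.k8s.io CRDs being removed during addon teardown, and the clusterIP allocations confirm the nginx and hello-world-app test Services were created. The allocations can be cross-checked with (a sketch):

	$ kubectl --context addons-691233 get svc -A -o wide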
	
	
	==> kube-controller-manager [6d212f3f321ab4b7c516af624171bfab54f83237edba1f1da5294d3a188605ee] <==
	I0904 05:58:31.676767       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0904 05:58:31.726674       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0904 05:58:31.726768       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0904 05:58:33.038347       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 05:58:33.039315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 05:58:34.168622       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 05:58:34.169474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 05:58:36.680729       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 05:58:36.681600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 05:58:51.874971       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 05:58:51.875990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 05:58:56.691042       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 05:58:56.692535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 05:58:57.966762       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 05:58:57.967861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 05:59:27.850720       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 05:59:27.851686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 05:59:28.162412       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 05:59:28.163349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 05:59:39.922504       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 05:59:39.923532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 06:00:13.670269       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 06:00:13.671583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0904 06:00:14.967662       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0904 06:00:14.968744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
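The repeating *v1.PartialObjectMetadata watch failures are the controller-manager's metadata informers still retrying against the deleted snapshot resources; they back off and are benign for this test. Confirming the CRDs are gone (a sketch):

	$ kubectl --context addons-691233 get crd | grep snapshot.storage.k8s.io \
	    || echo "no snapshot CRDs remain"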
	
	
	==> kube-proxy [d3c7a6229e6c279522216354402eceaee4700055e167f67f3706e3ac24985966] <==
	I0904 05:54:33.885828       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0904 05:54:33.990779       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 05:54:33.990809       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.193"]
	E0904 05:54:34.008463       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 05:54:34.164245       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0904 05:54:34.164324       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0904 05:54:34.164356       1 server_linux.go:132] "Using iptables Proxier"
	I0904 05:54:34.195515       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 05:54:34.196566       1 server.go:527] "Version info" version="v1.34.0"
	I0904 05:54:34.196579       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 05:54:34.214065       1 config.go:200] "Starting service config controller"
	I0904 05:54:34.214104       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 05:54:34.214140       1 config.go:106] "Starting endpoint slice config controller"
	I0904 05:54:34.214144       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 05:54:34.214310       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 05:54:34.214316       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 05:54:34.219365       1 config.go:309] "Starting node config controller"
	I0904 05:54:34.219374       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 05:54:34.219380       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 05:54:34.316337       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 05:54:34.316377       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0904 05:54:34.316408       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
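kube-proxy fell back to IPv4-only iptables mode because this guest kernel has no ip6tables nat table, which is expected for the minikube ISO. The generated service rules can be inspected on the node (a sketch; KUBE-SERVICES is the iptables proxier's top-level chain):

	$ minikube -p addons-691233 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20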
	
	
	==> kube-scheduler [23bf46f423a87b1f2aed90ce4853d8f5eb85470dacc08ab60120fbadfc147af9] <==
	E0904 05:54:24.562867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0904 05:54:24.563734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0904 05:54:24.563762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0904 05:54:24.563952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0904 05:54:24.564191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0904 05:54:24.564343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0904 05:54:24.564406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0904 05:54:24.564519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0904 05:54:24.564938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0904 05:54:24.565002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0904 05:54:24.565167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0904 05:54:24.565361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0904 05:54:24.565433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0904 05:54:25.407003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0904 05:54:25.451442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0904 05:54:25.480148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0904 05:54:25.545785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0904 05:54:25.550167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0904 05:54:25.555296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0904 05:54:25.563494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0904 05:54:25.566649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0904 05:54:25.578369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0904 05:54:25.641354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0904 05:54:25.696109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I0904 05:54:27.955438       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
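The burst of "forbidden" list errors at 05:54:24-25 occurs while the RBAC bootstrap policies are still being installed; the closing "Caches are synced" line shows the scheduler recovered on retry. The scheduler's effective permissions can be spot-checked with impersonation (a sketch):

	$ kubectl --context addons-691233 auth can-i list pods --as=system:kube-scheduler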
	
	
	==> kubelet <==
	Sep 04 05:58:37 addons-691233 kubelet[1507]: E0904 05:58:37.594743    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756965517594319974  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 04 05:58:37 addons-691233 kubelet[1507]: E0904 05:58:37.594773    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756965517594319974  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 04 05:58:47 addons-691233 kubelet[1507]: E0904 05:58:47.598212    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756965527597815237  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 04 05:58:47 addons-691233 kubelet[1507]: E0904 05:58:47.598255    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756965527597815237  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 04 05:58:57 addons-691233 kubelet[1507]: E0904 05:58:57.601733    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756965537601359974  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 04 05:58:57 addons-691233 kubelet[1507]: E0904 05:58:57.601768    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756965537601359974  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 04 05:59:07 addons-691233 kubelet[1507]: E0904 05:59:07.605109    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756965547604640700  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 04 05:59:07 addons-691233 kubelet[1507]: E0904 05:59:07.605140    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756965547604640700  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 04 05:59:17 addons-691233 kubelet[1507]: E0904 05:59:17.608111    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756965557607469816  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 04 05:59:17 addons-691233 kubelet[1507]: E0904 05:59:17.608152    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756965557607469816  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 04 05:59:27 addons-691233 kubelet[1507]: E0904 05:59:27.610920    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756965567610278419  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 04 05:59:27 addons-691233 kubelet[1507]: E0904 05:59:27.611669    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756965567610278419  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 04 05:59:29 addons-691233 kubelet[1507]: I0904 05:59:29.212030    1507 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 04 05:59:37 addons-691233 kubelet[1507]: I0904 05:59:37.212606    1507 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-7rq2w" secret="" err="secret \"gcp-auth\" not found"
	Sep 04 05:59:37 addons-691233 kubelet[1507]: E0904 05:59:37.614344    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756965577613997689  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 04 05:59:37 addons-691233 kubelet[1507]: E0904 05:59:37.614369    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756965577613997689  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 04 05:59:47 addons-691233 kubelet[1507]: E0904 05:59:47.617332    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756965587616965585  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 04 05:59:47 addons-691233 kubelet[1507]: E0904 05:59:47.617373    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756965587616965585  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 04 05:59:57 addons-691233 kubelet[1507]: E0904 05:59:57.619809    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756965597619360680  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 04 05:59:57 addons-691233 kubelet[1507]: E0904 05:59:57.619854    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756965597619360680  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 04 06:00:07 addons-691233 kubelet[1507]: E0904 06:00:07.622804    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756965607622304188  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 04 06:00:07 addons-691233 kubelet[1507]: E0904 06:00:07.622832    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756965607622304188  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 04 06:00:17 addons-691233 kubelet[1507]: E0904 06:00:17.625547    1507 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756965617625026371  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 04 06:00:17 addons-691233 kubelet[1507]: E0904 06:00:17.625573    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756965617625026371  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 04 06:00:20 addons-691233 kubelet[1507]: I0904 06:00:20.435386    1507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4md7g\" (UniqueName: \"kubernetes.io/projected/14233a47-46d3-40a0-b1e8-3aa6aa320c43-kube-api-access-4md7g\") pod \"hello-world-app-5d498dc89-ksgwm\" (UID: \"14233a47-46d3-40a0-b1e8-3aa6aa320c43\") " pod="default/hello-world-app-5d498dc89-ksgwm"
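The eviction_manager errors repeat every 10s and appear to be a stats mismatch between kubelet v1.34.0 and the image-filesystem response from CRI-O 1.29; they disable eviction heuristics but do not bear on the ingress failure. The journal can be tailed directly (a sketch):

	$ minikube -p addons-691233 ssh -- sudo journalctl -u kubelet --no-pager -n 20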
	
	
	==> storage-provisioner [707d5ea26b7b3750aaecee78b81b19029671b73798a5937b61b8c75e49ef2eac] <==
	W0904 05:59:56.907410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 05:59:58.910664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 05:59:58.918047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:00:00.921631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:00:00.927122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:00:02.930016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:00:02.937995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:00:04.941154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:00:04.947452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:00:06.951094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:00:06.957930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:00:08.960601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:00:08.966387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:00:10.970292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:00:10.975268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:00:12.978568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:00:12.983678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:00:14.987131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:00:14.995562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:00:16.999475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:00:17.004672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:00:19.007504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:00:19.014382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:00:21.019416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 06:00:21.026076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
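The storage-provisioner warnings above come from reading the deprecated core/v1 Endpoints API. A minimal client-go sketch of the replacement the warning points to, listing discovery.k8s.io/v1 EndpointSlices instead (the kubeconfig path is a placeholder, not taken from this run):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; an in-cluster caller would use rest.InClusterConfig().
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// discovery.k8s.io/v1 EndpointSlice replaces the deprecated core/v1 Endpoints.
	slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
	}
}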
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-691233 -n addons-691233
helpers_test.go:269: (dbg) Run:  kubectl --context addons-691233 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-ksgwm ingress-nginx-admission-create-54h7w ingress-nginx-admission-patch-c5clw
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-691233 describe pod hello-world-app-5d498dc89-ksgwm ingress-nginx-admission-create-54h7w ingress-nginx-admission-patch-c5clw
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-691233 describe pod hello-world-app-5d498dc89-ksgwm ingress-nginx-admission-create-54h7w ingress-nginx-admission-patch-c5clw: exit status 1 (64.099892ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-ksgwm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-691233/192.168.39.193
	Start Time:       Thu, 04 Sep 2025 06:00:20 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4md7g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4md7g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-ksgwm to addons-691233
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-54h7w" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-c5clw" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-691233 describe pod hello-world-app-5d498dc89-ksgwm ingress-nginx-admission-create-54h7w ingress-nginx-admission-patch-c5clw: exit status 1
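The exit status 1 recorded here is expected kubectl behavior: describe prints the pods it did find (the stdout block above) and exits non-zero because two of the named pods are NotFound. A hedged sketch of tolerating that partial result, with the pod names copied from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "addons-691233", "describe", "pod",
		"hello-world-app-5d498dc89-ksgwm",
		"ingress-nginx-admission-create-54h7w",
		"ingress-nginx-admission-patch-c5clw")
	// CombinedOutput captures the NotFound messages that kubectl sends to stderr.
	out, err := cmd.CombinedOutput()
	// describe still printed the pods it found, so only fail on errors
	// other than NotFound.
	if err != nil && !strings.Contains(string(out), "NotFound") {
		panic(err)
	}
	fmt.Print(string(out))
}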
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-691233 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-691233 addons disable ingress-dns --alsologtostderr -v=1: (1.516960348s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-691233 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-691233 addons disable ingress --alsologtostderr -v=1: (7.762387301s)
--- FAIL: TestAddons/parallel/Ingress (164.68s)

x
+
TestPreload (174.61s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-962163 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-962163 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m41.705015571s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-962163 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-962163 image pull gcr.io/k8s-minikube/busybox: (3.424672004s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-962163
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-962163: (7.30259354s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-962163 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0904 06:51:47.491003 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:52:04.418135 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:52:26.528720 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-962163 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (59.148089547s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-962163 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

-- /stdout --
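The assertion that failed here is a plain substring check over the image-list output: preload_test.go pulls gcr.io/k8s-minikube/busybox, stops and restarts the cluster, then expects the pulled image to still appear in "image list". A standalone sketch of that check, assuming the binary path used throughout this report:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Same profile and binary path as the failing run above.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "test-preload-962163", "image", "list").Output()
	if err != nil {
		panic(err)
	}
	// The pulled image must survive the stop/start cycle to pass.
	if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Printf("expected gcr.io/k8s-minikube/busybox in image list output, instead got\n%s", out)
		os.Exit(1)
	}
}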
panic.go:636: *** TestPreload FAILED at 2025-09-04 06:52:42.870370393 +0000 UTC m=+3576.208625770
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-962163 -n test-preload-962163
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-962163 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-962163 logs -n 25: (1.004905209s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-140177 ssh -n multinode-140177-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-140177     │ jenkins │ v1.36.0 │ 04 Sep 25 06:38 UTC │ 04 Sep 25 06:38 UTC │
	│ ssh     │ multinode-140177 ssh -n multinode-140177 sudo cat /home/docker/cp-test_multinode-140177-m03_multinode-140177.txt                                          │ multinode-140177     │ jenkins │ v1.36.0 │ 04 Sep 25 06:38 UTC │ 04 Sep 25 06:38 UTC │
	│ cp      │ multinode-140177 cp multinode-140177-m03:/home/docker/cp-test.txt multinode-140177-m02:/home/docker/cp-test_multinode-140177-m03_multinode-140177-m02.txt │ multinode-140177     │ jenkins │ v1.36.0 │ 04 Sep 25 06:38 UTC │ 04 Sep 25 06:38 UTC │
	│ ssh     │ multinode-140177 ssh -n multinode-140177-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-140177     │ jenkins │ v1.36.0 │ 04 Sep 25 06:38 UTC │ 04 Sep 25 06:38 UTC │
	│ ssh     │ multinode-140177 ssh -n multinode-140177-m02 sudo cat /home/docker/cp-test_multinode-140177-m03_multinode-140177-m02.txt                                  │ multinode-140177     │ jenkins │ v1.36.0 │ 04 Sep 25 06:38 UTC │ 04 Sep 25 06:38 UTC │
	│ node    │ multinode-140177 node stop m03                                                                                                                            │ multinode-140177     │ jenkins │ v1.36.0 │ 04 Sep 25 06:38 UTC │ 04 Sep 25 06:38 UTC │
	│ node    │ multinode-140177 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-140177     │ jenkins │ v1.36.0 │ 04 Sep 25 06:38 UTC │ 04 Sep 25 06:38 UTC │
	│ node    │ list -p multinode-140177                                                                                                                                  │ multinode-140177     │ jenkins │ v1.36.0 │ 04 Sep 25 06:38 UTC │                     │
	│ stop    │ -p multinode-140177                                                                                                                                       │ multinode-140177     │ jenkins │ v1.36.0 │ 04 Sep 25 06:38 UTC │ 04 Sep 25 06:41 UTC │
	│ start   │ -p multinode-140177 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-140177     │ jenkins │ v1.36.0 │ 04 Sep 25 06:41 UTC │ 04 Sep 25 06:44 UTC │
	│ node    │ list -p multinode-140177                                                                                                                                  │ multinode-140177     │ jenkins │ v1.36.0 │ 04 Sep 25 06:44 UTC │                     │
	│ node    │ multinode-140177 node delete m03                                                                                                                          │ multinode-140177     │ jenkins │ v1.36.0 │ 04 Sep 25 06:44 UTC │ 04 Sep 25 06:44 UTC │
	│ stop    │ multinode-140177 stop                                                                                                                                     │ multinode-140177     │ jenkins │ v1.36.0 │ 04 Sep 25 06:44 UTC │ 04 Sep 25 06:47 UTC │
	│ start   │ -p multinode-140177 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-140177     │ jenkins │ v1.36.0 │ 04 Sep 25 06:47 UTC │ 04 Sep 25 06:49 UTC │
	│ node    │ list -p multinode-140177                                                                                                                                  │ multinode-140177     │ jenkins │ v1.36.0 │ 04 Sep 25 06:49 UTC │                     │
	│ start   │ -p multinode-140177-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-140177-m02 │ jenkins │ v1.36.0 │ 04 Sep 25 06:49 UTC │                     │
	│ start   │ -p multinode-140177-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-140177-m03 │ jenkins │ v1.36.0 │ 04 Sep 25 06:49 UTC │ 04 Sep 25 06:49 UTC │
	│ node    │ add -p multinode-140177                                                                                                                                   │ multinode-140177     │ jenkins │ v1.36.0 │ 04 Sep 25 06:49 UTC │                     │
	│ delete  │ -p multinode-140177-m03                                                                                                                                   │ multinode-140177-m03 │ jenkins │ v1.36.0 │ 04 Sep 25 06:49 UTC │ 04 Sep 25 06:49 UTC │
	│ delete  │ -p multinode-140177                                                                                                                                       │ multinode-140177     │ jenkins │ v1.36.0 │ 04 Sep 25 06:49 UTC │ 04 Sep 25 06:49 UTC │
	│ start   │ -p test-preload-962163 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-962163  │ jenkins │ v1.36.0 │ 04 Sep 25 06:49 UTC │ 04 Sep 25 06:51 UTC │
	│ image   │ test-preload-962163 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-962163  │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:51 UTC │
	│ stop    │ -p test-preload-962163                                                                                                                                    │ test-preload-962163  │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:51 UTC │
	│ start   │ -p test-preload-962163 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-962163  │ jenkins │ v1.36.0 │ 04 Sep 25 06:51 UTC │ 04 Sep 25 06:52 UTC │
	│ image   │ test-preload-962163 image list                                                                                                                            │ test-preload-962163  │ jenkins │ v1.36.0 │ 04 Sep 25 06:52 UTC │ 04 Sep 25 06:52 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 06:51:43
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 06:51:43.555660 1152397 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:51:43.555920 1152397 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:51:43.555929 1152397 out.go:374] Setting ErrFile to fd 2...
	I0904 06:51:43.555933 1152397 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:51:43.556142 1152397 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1115845/.minikube/bin
	I0904 06:51:43.556681 1152397 out.go:368] Setting JSON to false
	I0904 06:51:43.557661 1152397 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":16447,"bootTime":1756952257,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 06:51:43.557724 1152397 start.go:140] virtualization: kvm guest
	I0904 06:51:43.559588 1152397 out.go:179] * [test-preload-962163] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 06:51:43.560788 1152397 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 06:51:43.560815 1152397 notify.go:220] Checking for updates...
	I0904 06:51:43.562774 1152397 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 06:51:43.563893 1152397 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1115845/kubeconfig
	I0904 06:51:43.564890 1152397 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1115845/.minikube
	I0904 06:51:43.566074 1152397 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 06:51:43.567258 1152397 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 06:51:43.568908 1152397 config.go:182] Loaded profile config "test-preload-962163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0904 06:51:43.569496 1152397 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:51:43.569579 1152397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:51:43.585212 1152397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34431
	I0904 06:51:43.585786 1152397 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:51:43.586325 1152397 main.go:141] libmachine: Using API Version  1
	I0904 06:51:43.586363 1152397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:51:43.586752 1152397 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:51:43.586913 1152397 main.go:141] libmachine: (test-preload-962163) Calling .DriverName
	I0904 06:51:43.588742 1152397 out.go:179] * Kubernetes 1.34.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.0
	I0904 06:51:43.589929 1152397 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 06:51:43.590214 1152397 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:51:43.590250 1152397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:51:43.605358 1152397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35745
	I0904 06:51:43.605777 1152397 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:51:43.606182 1152397 main.go:141] libmachine: Using API Version  1
	I0904 06:51:43.606209 1152397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:51:43.606537 1152397 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:51:43.606717 1152397 main.go:141] libmachine: (test-preload-962163) Calling .DriverName
	I0904 06:51:43.641986 1152397 out.go:179] * Using the kvm2 driver based on existing profile
	I0904 06:51:43.643027 1152397 start.go:304] selected driver: kvm2
	I0904 06:51:43.643042 1152397 start.go:918] validating driver "kvm2" against &{Name:test-preload-962163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-962163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.252 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:51:43.643130 1152397 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 06:51:43.643826 1152397 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 06:51:43.643909 1152397 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1115845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0904 06:51:43.659271 1152397 install.go:137] /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0904 06:51:43.659632 1152397 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 06:51:43.659667 1152397 cni.go:84] Creating CNI manager for ""
	I0904 06:51:43.659722 1152397 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 06:51:43.659776 1152397 start.go:348] cluster config:
	{Name:test-preload-962163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-962163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.252 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:51:43.659894 1152397 iso.go:125] acquiring lock: {Name:mk8046b526ef8e07e7f8bc343ab464442f664799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 06:51:43.661878 1152397 out.go:179] * Starting "test-preload-962163" primary control-plane node in "test-preload-962163" cluster
	I0904 06:51:43.662876 1152397 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0904 06:51:44.045231 1152397 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0904 06:51:44.045290 1152397 cache.go:58] Caching tarball of preloaded images
	I0904 06:51:44.045450 1152397 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0904 06:51:44.046932 1152397 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I0904 06:51:44.047970 1152397 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0904 06:51:44.147476 1152397 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21409-1115845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0904 06:51:53.748436 1152397 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0904 06:51:53.748562 1152397 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21409-1115845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0904 06:51:54.501966 1152397 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
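The preload fetch above appends an md5 checksum to the download URL and verifies it locally before the tarball is used. A minimal sketch of that download-and-verify step (URL and checksum are copied from the log; the destination path is illustrative):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	const (
		url  = "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4"
		want = "2acdb4dde52794f2167c79dcee7507ae" // from the ?checksum=md5:... query above
		dst  = "/tmp/preloaded-images.tar.lz4"    // illustrative destination
	)
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	f, err := os.Create(dst)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Hash while writing so the tarball is streamed only once.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		panic(err)
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s, want %s", got, want))
	}
	fmt.Println("checksum verified:", want)
}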
	I0904 06:51:54.502147 1152397 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/test-preload-962163/config.json ...
	I0904 06:51:54.502462 1152397 start.go:360] acquireMachinesLock for test-preload-962163: {Name:mk3d0e482c06d0ca53afa1318fbdd30ffc2f15b4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 06:51:54.502573 1152397 start.go:364] duration metric: took 80.714µs to acquireMachinesLock for "test-preload-962163"
	I0904 06:51:54.502600 1152397 start.go:96] Skipping create...Using existing machine configuration
	I0904 06:51:54.502609 1152397 fix.go:54] fixHost starting: 
	I0904 06:51:54.502956 1152397 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:51:54.503009 1152397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:51:54.518111 1152397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46147
	I0904 06:51:54.518617 1152397 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:51:54.519132 1152397 main.go:141] libmachine: Using API Version  1
	I0904 06:51:54.519157 1152397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:51:54.519591 1152397 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:51:54.519799 1152397 main.go:141] libmachine: (test-preload-962163) Calling .DriverName
	I0904 06:51:54.519989 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetState
	I0904 06:51:54.521920 1152397 fix.go:112] recreateIfNeeded on test-preload-962163: state=Stopped err=<nil>
	I0904 06:51:54.521936 1152397 main.go:141] libmachine: (test-preload-962163) Calling .DriverName
	W0904 06:51:54.522103 1152397 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 06:51:54.524018 1152397 out.go:252] * Restarting existing kvm2 VM for "test-preload-962163" ...
	I0904 06:51:54.524045 1152397 main.go:141] libmachine: (test-preload-962163) Calling .Start
	I0904 06:51:54.524218 1152397 main.go:141] libmachine: (test-preload-962163) starting domain...
	I0904 06:51:54.524236 1152397 main.go:141] libmachine: (test-preload-962163) ensuring networks are active...
	I0904 06:51:54.524926 1152397 main.go:141] libmachine: (test-preload-962163) Ensuring network default is active
	I0904 06:51:54.525243 1152397 main.go:141] libmachine: (test-preload-962163) Ensuring network mk-test-preload-962163 is active
	I0904 06:51:54.525628 1152397 main.go:141] libmachine: (test-preload-962163) getting domain XML...
	I0904 06:51:54.526323 1152397 main.go:141] libmachine: (test-preload-962163) creating domain...
	I0904 06:51:55.730922 1152397 main.go:141] libmachine: (test-preload-962163) waiting for IP...
	I0904 06:51:55.731815 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:51:55.732237 1152397 main.go:141] libmachine: (test-preload-962163) DBG | unable to find current IP address of domain test-preload-962163 in network mk-test-preload-962163
	I0904 06:51:55.732306 1152397 main.go:141] libmachine: (test-preload-962163) DBG | I0904 06:51:55.732229 1152465 retry.go:31] will retry after 217.384848ms: waiting for domain to come up
	I0904 06:51:55.951792 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:51:55.952225 1152397 main.go:141] libmachine: (test-preload-962163) DBG | unable to find current IP address of domain test-preload-962163 in network mk-test-preload-962163
	I0904 06:51:55.952269 1152397 main.go:141] libmachine: (test-preload-962163) DBG | I0904 06:51:55.952195 1152465 retry.go:31] will retry after 306.040727ms: waiting for domain to come up
	I0904 06:51:56.259876 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:51:56.260417 1152397 main.go:141] libmachine: (test-preload-962163) DBG | unable to find current IP address of domain test-preload-962163 in network mk-test-preload-962163
	I0904 06:51:56.260448 1152397 main.go:141] libmachine: (test-preload-962163) DBG | I0904 06:51:56.260395 1152465 retry.go:31] will retry after 409.387439ms: waiting for domain to come up
	I0904 06:51:56.671276 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:51:56.671767 1152397 main.go:141] libmachine: (test-preload-962163) DBG | unable to find current IP address of domain test-preload-962163 in network mk-test-preload-962163
	I0904 06:51:56.671789 1152397 main.go:141] libmachine: (test-preload-962163) DBG | I0904 06:51:56.671741 1152465 retry.go:31] will retry after 550.637962ms: waiting for domain to come up
	I0904 06:51:57.224495 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:51:57.224915 1152397 main.go:141] libmachine: (test-preload-962163) DBG | unable to find current IP address of domain test-preload-962163 in network mk-test-preload-962163
	I0904 06:51:57.224943 1152397 main.go:141] libmachine: (test-preload-962163) DBG | I0904 06:51:57.224875 1152465 retry.go:31] will retry after 614.75409ms: waiting for domain to come up
	I0904 06:51:57.840896 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:51:57.841419 1152397 main.go:141] libmachine: (test-preload-962163) DBG | unable to find current IP address of domain test-preload-962163 in network mk-test-preload-962163
	I0904 06:51:57.841446 1152397 main.go:141] libmachine: (test-preload-962163) DBG | I0904 06:51:57.841389 1152465 retry.go:31] will retry after 761.775446ms: waiting for domain to come up
	I0904 06:51:58.604934 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:51:58.605253 1152397 main.go:141] libmachine: (test-preload-962163) DBG | unable to find current IP address of domain test-preload-962163 in network mk-test-preload-962163
	I0904 06:51:58.605282 1152397 main.go:141] libmachine: (test-preload-962163) DBG | I0904 06:51:58.605231 1152465 retry.go:31] will retry after 1.019362665s: waiting for domain to come up
	I0904 06:51:59.626764 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:51:59.627284 1152397 main.go:141] libmachine: (test-preload-962163) DBG | unable to find current IP address of domain test-preload-962163 in network mk-test-preload-962163
	I0904 06:51:59.627313 1152397 main.go:141] libmachine: (test-preload-962163) DBG | I0904 06:51:59.627252 1152465 retry.go:31] will retry after 969.402594ms: waiting for domain to come up
	I0904 06:52:00.598344 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:00.598848 1152397 main.go:141] libmachine: (test-preload-962163) DBG | unable to find current IP address of domain test-preload-962163 in network mk-test-preload-962163
	I0904 06:52:00.598881 1152397 main.go:141] libmachine: (test-preload-962163) DBG | I0904 06:52:00.598799 1152465 retry.go:31] will retry after 1.314666356s: waiting for domain to come up
	I0904 06:52:01.915119 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:01.915527 1152397 main.go:141] libmachine: (test-preload-962163) DBG | unable to find current IP address of domain test-preload-962163 in network mk-test-preload-962163
	I0904 06:52:01.915587 1152397 main.go:141] libmachine: (test-preload-962163) DBG | I0904 06:52:01.915516 1152465 retry.go:31] will retry after 2.018593008s: waiting for domain to come up
	I0904 06:52:03.936775 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:03.937218 1152397 main.go:141] libmachine: (test-preload-962163) DBG | unable to find current IP address of domain test-preload-962163 in network mk-test-preload-962163
	I0904 06:52:03.937287 1152397 main.go:141] libmachine: (test-preload-962163) DBG | I0904 06:52:03.937212 1152465 retry.go:31] will retry after 2.413395098s: waiting for domain to come up
	I0904 06:52:06.353316 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:06.353801 1152397 main.go:141] libmachine: (test-preload-962163) DBG | unable to find current IP address of domain test-preload-962163 in network mk-test-preload-962163
	I0904 06:52:06.353867 1152397 main.go:141] libmachine: (test-preload-962163) DBG | I0904 06:52:06.353782 1152465 retry.go:31] will retry after 2.482964595s: waiting for domain to come up
	I0904 06:52:08.839351 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:08.839780 1152397 main.go:141] libmachine: (test-preload-962163) DBG | unable to find current IP address of domain test-preload-962163 in network mk-test-preload-962163
	I0904 06:52:08.839810 1152397 main.go:141] libmachine: (test-preload-962163) DBG | I0904 06:52:08.839734 1152465 retry.go:31] will retry after 3.815985079s: waiting for domain to come up
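The run of "will retry after ..." lines above is minikube's poll loop waiting for the new domain to pick up a DHCP lease; the delay grows with some jitter between attempts. A rough sketch of that pattern, with lookupIP standing in as a placeholder for the libvirt lease query rather than the real driver call:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for querying the DHCP leases of the libvirt network.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls with a growing, jittered delay until the domain has an IP
// or the deadline passes, echoing the retry lines in the log above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Jitter the delay, then roughly double it, capping the growth.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for IP")
}

func main() {
	if _, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println(err)
	}
}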
	I0904 06:52:12.659885 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:12.660250 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has current primary IP address 192.168.39.252 and MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:12.660266 1152397 main.go:141] libmachine: (test-preload-962163) found domain IP: 192.168.39.252
	I0904 06:52:12.660274 1152397 main.go:141] libmachine: (test-preload-962163) reserving static IP address...
	I0904 06:52:12.660689 1152397 main.go:141] libmachine: (test-preload-962163) DBG | found host DHCP lease matching {name: "test-preload-962163", mac: "52:54:00:d4:91:fd", ip: "192.168.39.252"} in network mk-test-preload-962163: {Iface:virbr1 ExpiryTime:2025-09-04 07:52:05 +0000 UTC Type:0 Mac:52:54:00:d4:91:fd Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:test-preload-962163 Clientid:01:52:54:00:d4:91:fd}
	I0904 06:52:12.660706 1152397 main.go:141] libmachine: (test-preload-962163) reserved static IP address 192.168.39.252 for domain test-preload-962163
	I0904 06:52:12.660719 1152397 main.go:141] libmachine: (test-preload-962163) DBG | skip adding static IP to network mk-test-preload-962163 - found existing host DHCP lease matching {name: "test-preload-962163", mac: "52:54:00:d4:91:fd", ip: "192.168.39.252"}
	I0904 06:52:12.660728 1152397 main.go:141] libmachine: (test-preload-962163) DBG | Getting to WaitForSSH function...
	I0904 06:52:12.660733 1152397 main.go:141] libmachine: (test-preload-962163) waiting for SSH...
	I0904 06:52:12.662719 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:12.663014 1152397 main.go:141] libmachine: (test-preload-962163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:91:fd", ip: ""} in network mk-test-preload-962163: {Iface:virbr1 ExpiryTime:2025-09-04 07:52:05 +0000 UTC Type:0 Mac:52:54:00:d4:91:fd Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:test-preload-962163 Clientid:01:52:54:00:d4:91:fd}
	I0904 06:52:12.663047 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined IP address 192.168.39.252 and MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:12.663140 1152397 main.go:141] libmachine: (test-preload-962163) DBG | Using SSH client type: external
	I0904 06:52:12.663166 1152397 main.go:141] libmachine: (test-preload-962163) DBG | Using SSH private key: /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/test-preload-962163/id_rsa (-rw-------)
	I0904 06:52:12.663198 1152397 main.go:141] libmachine: (test-preload-962163) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.252 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/test-preload-962163/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0904 06:52:12.663212 1152397 main.go:141] libmachine: (test-preload-962163) DBG | About to run SSH command:
	I0904 06:52:12.663225 1152397 main.go:141] libmachine: (test-preload-962163) DBG | exit 0
	I0904 06:52:12.787028 1152397 main.go:141] libmachine: (test-preload-962163) DBG | SSH cmd err, output: <nil>: 
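WaitForSSH, which just succeeded above, shells out to the system ssh client and runs "exit 0" on the guest; a zero exit status means sshd is up and accepting key-based logins. A sketch assembled from the options in the DBG lines (the key path and address are specific to this run):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/test-preload-962163/id_rsa",
		"-p", "22",
		"docker@192.168.39.252",
		"exit 0")
	// A nil error here means the remote command exited 0, i.e. SSH is ready.
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("ssh not ready: %v: %s\n", err, out)
		return
	}
	fmt.Println("ssh ready")
}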
	I0904 06:52:12.787401 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetConfigRaw
	I0904 06:52:12.788023 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetIP
	I0904 06:52:12.790429 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:12.790744 1152397 main.go:141] libmachine: (test-preload-962163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:91:fd", ip: ""} in network mk-test-preload-962163: {Iface:virbr1 ExpiryTime:2025-09-04 07:52:05 +0000 UTC Type:0 Mac:52:54:00:d4:91:fd Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:test-preload-962163 Clientid:01:52:54:00:d4:91:fd}
	I0904 06:52:12.790789 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined IP address 192.168.39.252 and MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:12.791065 1152397 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/test-preload-962163/config.json ...
	I0904 06:52:12.791287 1152397 machine.go:93] provisionDockerMachine start ...
	I0904 06:52:12.791311 1152397 main.go:141] libmachine: (test-preload-962163) Calling .DriverName
	I0904 06:52:12.791524 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHHostname
	I0904 06:52:12.793715 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:12.794073 1152397 main.go:141] libmachine: (test-preload-962163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:91:fd", ip: ""} in network mk-test-preload-962163: {Iface:virbr1 ExpiryTime:2025-09-04 07:52:05 +0000 UTC Type:0 Mac:52:54:00:d4:91:fd Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:test-preload-962163 Clientid:01:52:54:00:d4:91:fd}
	I0904 06:52:12.794101 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined IP address 192.168.39.252 and MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:12.794209 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHPort
	I0904 06:52:12.794371 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHKeyPath
	I0904 06:52:12.794524 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHKeyPath
	I0904 06:52:12.794643 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHUsername
	I0904 06:52:12.794851 1152397 main.go:141] libmachine: Using SSH client type: native
	I0904 06:52:12.795131 1152397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I0904 06:52:12.795144 1152397 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 06:52:12.899100 1152397 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0904 06:52:12.899133 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetMachineName
	I0904 06:52:12.899408 1152397 buildroot.go:166] provisioning hostname "test-preload-962163"
	I0904 06:52:12.899436 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetMachineName
	I0904 06:52:12.899599 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHHostname
	I0904 06:52:12.902419 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:12.902740 1152397 main.go:141] libmachine: (test-preload-962163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:91:fd", ip: ""} in network mk-test-preload-962163: {Iface:virbr1 ExpiryTime:2025-09-04 07:52:05 +0000 UTC Type:0 Mac:52:54:00:d4:91:fd Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:test-preload-962163 Clientid:01:52:54:00:d4:91:fd}
	I0904 06:52:12.902769 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined IP address 192.168.39.252 and MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:12.902901 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHPort
	I0904 06:52:12.903129 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHKeyPath
	I0904 06:52:12.903277 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHKeyPath
	I0904 06:52:12.903411 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHUsername
	I0904 06:52:12.903536 1152397 main.go:141] libmachine: Using SSH client type: native
	I0904 06:52:12.903730 1152397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I0904 06:52:12.903742 1152397 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-962163 && echo "test-preload-962163" | sudo tee /etc/hostname
	I0904 06:52:13.022163 1152397 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-962163
	
	I0904 06:52:13.022198 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHHostname
	I0904 06:52:13.024913 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:13.025278 1152397 main.go:141] libmachine: (test-preload-962163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:91:fd", ip: ""} in network mk-test-preload-962163: {Iface:virbr1 ExpiryTime:2025-09-04 07:52:05 +0000 UTC Type:0 Mac:52:54:00:d4:91:fd Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:test-preload-962163 Clientid:01:52:54:00:d4:91:fd}
	I0904 06:52:13.025306 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined IP address 192.168.39.252 and MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:13.025502 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHPort
	I0904 06:52:13.025682 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHKeyPath
	I0904 06:52:13.025848 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHKeyPath
	I0904 06:52:13.025950 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHUsername
	I0904 06:52:13.026115 1152397 main.go:141] libmachine: Using SSH client type: native
	I0904 06:52:13.026324 1152397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I0904 06:52:13.026342 1152397 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-962163' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-962163/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-962163' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 06:52:13.141268 1152397 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 06:52:13.141305 1152397 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-1115845/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-1115845/.minikube}
	I0904 06:52:13.141329 1152397 buildroot.go:174] setting up certificates
	I0904 06:52:13.141339 1152397 provision.go:84] configureAuth start
	I0904 06:52:13.141348 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetMachineName
	I0904 06:52:13.141603 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetIP
	I0904 06:52:13.144396 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:13.144752 1152397 main.go:141] libmachine: (test-preload-962163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:91:fd", ip: ""} in network mk-test-preload-962163: {Iface:virbr1 ExpiryTime:2025-09-04 07:52:05 +0000 UTC Type:0 Mac:52:54:00:d4:91:fd Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:test-preload-962163 Clientid:01:52:54:00:d4:91:fd}
	I0904 06:52:13.144783 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined IP address 192.168.39.252 and MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:13.144936 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHHostname
	I0904 06:52:13.146949 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:13.147231 1152397 main.go:141] libmachine: (test-preload-962163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:91:fd", ip: ""} in network mk-test-preload-962163: {Iface:virbr1 ExpiryTime:2025-09-04 07:52:05 +0000 UTC Type:0 Mac:52:54:00:d4:91:fd Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:test-preload-962163 Clientid:01:52:54:00:d4:91:fd}
	I0904 06:52:13.147267 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined IP address 192.168.39.252 and MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:13.147352 1152397 provision.go:143] copyHostCerts
	I0904 06:52:13.147421 1152397 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.pem, removing ...
	I0904 06:52:13.147431 1152397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.pem
	I0904 06:52:13.147499 1152397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.pem (1082 bytes)
	I0904 06:52:13.147583 1152397 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1115845/.minikube/cert.pem, removing ...
	I0904 06:52:13.147590 1152397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1115845/.minikube/cert.pem
	I0904 06:52:13.147615 1152397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-1115845/.minikube/cert.pem (1123 bytes)
	I0904 06:52:13.147670 1152397 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1115845/.minikube/key.pem, removing ...
	I0904 06:52:13.147677 1152397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1115845/.minikube/key.pem
	I0904 06:52:13.147698 1152397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-1115845/.minikube/key.pem (1679 bytes)
	I0904 06:52:13.147755 1152397 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca-key.pem org=jenkins.test-preload-962163 san=[127.0.0.1 192.168.39.252 localhost minikube test-preload-962163]
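The server certificate generated above carries two IP SANs and three DNS SANs. A self-contained sketch producing a certificate with the same SANs; note the real provisioner signs with the minikube CA key, whereas this version self-signs to stay short:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-962163"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the san=[...] list in the log.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.252")},
		DNSNames:    []string{"localhost", "minikube", "test-preload-962163"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}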
	I0904 06:52:13.303584 1152397 provision.go:177] copyRemoteCerts
	I0904 06:52:13.303662 1152397 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 06:52:13.303696 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHHostname
	I0904 06:52:13.306562 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:13.306892 1152397 main.go:141] libmachine: (test-preload-962163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:91:fd", ip: ""} in network mk-test-preload-962163: {Iface:virbr1 ExpiryTime:2025-09-04 07:52:05 +0000 UTC Type:0 Mac:52:54:00:d4:91:fd Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:test-preload-962163 Clientid:01:52:54:00:d4:91:fd}
	I0904 06:52:13.306917 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined IP address 192.168.39.252 and MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:13.307075 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHPort
	I0904 06:52:13.307284 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHKeyPath
	I0904 06:52:13.307466 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHUsername
	I0904 06:52:13.307616 1152397 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/test-preload-962163/id_rsa Username:docker}
	I0904 06:52:13.390180 1152397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0904 06:52:13.416279 1152397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0904 06:52:13.442385 1152397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 06:52:13.468174 1152397 provision.go:87] duration metric: took 326.821603ms to configureAuth
	I0904 06:52:13.468204 1152397 buildroot.go:189] setting minikube options for container-runtime
	I0904 06:52:13.468379 1152397 config.go:182] Loaded profile config "test-preload-962163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0904 06:52:13.468472 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHHostname
	I0904 06:52:13.471013 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:13.471313 1152397 main.go:141] libmachine: (test-preload-962163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:91:fd", ip: ""} in network mk-test-preload-962163: {Iface:virbr1 ExpiryTime:2025-09-04 07:52:05 +0000 UTC Type:0 Mac:52:54:00:d4:91:fd Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:test-preload-962163 Clientid:01:52:54:00:d4:91:fd}
	I0904 06:52:13.471337 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined IP address 192.168.39.252 and MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:13.471555 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHPort
	I0904 06:52:13.471768 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHKeyPath
	I0904 06:52:13.471927 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHKeyPath
	I0904 06:52:13.472065 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHUsername
	I0904 06:52:13.472193 1152397 main.go:141] libmachine: Using SSH client type: native
	I0904 06:52:13.472382 1152397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I0904 06:52:13.472397 1152397 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 06:52:13.711550 1152397 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 06:52:13.711588 1152397 machine.go:96] duration metric: took 920.286859ms to provisionDockerMachine
	I0904 06:52:13.711599 1152397 start.go:293] postStartSetup for "test-preload-962163" (driver="kvm2")
	I0904 06:52:13.711610 1152397 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 06:52:13.711628 1152397 main.go:141] libmachine: (test-preload-962163) Calling .DriverName
	I0904 06:52:13.711929 1152397 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 06:52:13.711963 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHHostname
	I0904 06:52:13.714410 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:13.714773 1152397 main.go:141] libmachine: (test-preload-962163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:91:fd", ip: ""} in network mk-test-preload-962163: {Iface:virbr1 ExpiryTime:2025-09-04 07:52:05 +0000 UTC Type:0 Mac:52:54:00:d4:91:fd Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:test-preload-962163 Clientid:01:52:54:00:d4:91:fd}
	I0904 06:52:13.714812 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined IP address 192.168.39.252 and MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:13.715056 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHPort
	I0904 06:52:13.715272 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHKeyPath
	I0904 06:52:13.715462 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHUsername
	I0904 06:52:13.715606 1152397 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/test-preload-962163/id_rsa Username:docker}
	I0904 06:52:13.798758 1152397 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 06:52:13.803090 1152397 info.go:137] Remote host: Buildroot 2025.02
	I0904 06:52:13.803117 1152397 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1115845/.minikube/addons for local assets ...
	I0904 06:52:13.803197 1152397 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1115845/.minikube/files for local assets ...
	I0904 06:52:13.803273 1152397 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-1115845/.minikube/files/etc/ssl/certs/11200742.pem -> 11200742.pem in /etc/ssl/certs
	I0904 06:52:13.803358 1152397 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 06:52:13.813949 1152397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/files/etc/ssl/certs/11200742.pem --> /etc/ssl/certs/11200742.pem (1708 bytes)
	I0904 06:52:13.840299 1152397 start.go:296] duration metric: took 128.68199ms for postStartSetup
	I0904 06:52:13.840338 1152397 fix.go:56] duration metric: took 19.337730816s for fixHost
	I0904 06:52:13.840359 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHHostname
	I0904 06:52:13.843222 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:13.843538 1152397 main.go:141] libmachine: (test-preload-962163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:91:fd", ip: ""} in network mk-test-preload-962163: {Iface:virbr1 ExpiryTime:2025-09-04 07:52:05 +0000 UTC Type:0 Mac:52:54:00:d4:91:fd Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:test-preload-962163 Clientid:01:52:54:00:d4:91:fd}
	I0904 06:52:13.843573 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined IP address 192.168.39.252 and MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:13.843761 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHPort
	I0904 06:52:13.843967 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHKeyPath
	I0904 06:52:13.844137 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHKeyPath
	I0904 06:52:13.844274 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHUsername
	I0904 06:52:13.844414 1152397 main.go:141] libmachine: Using SSH client type: native
	I0904 06:52:13.844615 1152397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.252 22 <nil> <nil>}
	I0904 06:52:13.844626 1152397 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0904 06:52:13.947692 1152397 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756968733.917295584
	
	I0904 06:52:13.947721 1152397 fix.go:216] guest clock: 1756968733.917295584
	I0904 06:52:13.947729 1152397 fix.go:229] Guest: 2025-09-04 06:52:13.917295584 +0000 UTC Remote: 2025-09-04 06:52:13.840342215 +0000 UTC m=+30.321725411 (delta=76.953369ms)
	I0904 06:52:13.947750 1152397 fix.go:200] guest clock delta is within tolerance: 76.953369ms
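The fix.go lines above compare the guest's `date +%s.%N` output against the host-side timestamp and accept the drift because it is under tolerance. A minimal Go sketch of that comparison, reusing the values from this log (the parsing and the one-second tolerance are illustrative assumptions, not minikube's actual implementation):

package main

import (
	"fmt"
	"log"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses `date +%s.%N` output from the guest and returns the
// signed offset between the guest clock and a local reference time.
func clockDelta(guestOut string, local time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest clock %q: %w", guestOut, err)
	}
	guest := time.Unix(0, int64(secs*1e9))
	return guest.Sub(local), nil
}

func main() {
	// Both values are taken from the log lines above.
	local := time.Date(2025, 9, 4, 6, 52, 13, 840342215, time.UTC)
	delta, err := clockDelta("1756968733.917295584\n", local)
	if err != nil {
		log.Fatal(err)
	}
	if delta > -time.Second && delta < time.Second { // hypothetical 1s tolerance
		fmt.Printf("guest clock delta %v is within tolerance\n", delta) // ~76.95ms
	}
}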
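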
	I0904 06:52:13.947754 1152397 start.go:83] releasing machines lock for "test-preload-962163", held for 19.445166271s
	I0904 06:52:13.947774 1152397 main.go:141] libmachine: (test-preload-962163) Calling .DriverName
	I0904 06:52:13.948027 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetIP
	I0904 06:52:13.950725 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:13.951077 1152397 main.go:141] libmachine: (test-preload-962163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:91:fd", ip: ""} in network mk-test-preload-962163: {Iface:virbr1 ExpiryTime:2025-09-04 07:52:05 +0000 UTC Type:0 Mac:52:54:00:d4:91:fd Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:test-preload-962163 Clientid:01:52:54:00:d4:91:fd}
	I0904 06:52:13.951101 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined IP address 192.168.39.252 and MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:13.951277 1152397 main.go:141] libmachine: (test-preload-962163) Calling .DriverName
	I0904 06:52:13.951802 1152397 main.go:141] libmachine: (test-preload-962163) Calling .DriverName
	I0904 06:52:13.951986 1152397 main.go:141] libmachine: (test-preload-962163) Calling .DriverName
	I0904 06:52:13.952093 1152397 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 06:52:13.952151 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHHostname
	I0904 06:52:13.952158 1152397 ssh_runner.go:195] Run: cat /version.json
	I0904 06:52:13.952175 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHHostname
	I0904 06:52:13.954704 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:13.954955 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:13.955078 1152397 main.go:141] libmachine: (test-preload-962163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:91:fd", ip: ""} in network mk-test-preload-962163: {Iface:virbr1 ExpiryTime:2025-09-04 07:52:05 +0000 UTC Type:0 Mac:52:54:00:d4:91:fd Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:test-preload-962163 Clientid:01:52:54:00:d4:91:fd}
	I0904 06:52:13.955106 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined IP address 192.168.39.252 and MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:13.955247 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHPort
	I0904 06:52:13.955334 1152397 main.go:141] libmachine: (test-preload-962163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:91:fd", ip: ""} in network mk-test-preload-962163: {Iface:virbr1 ExpiryTime:2025-09-04 07:52:05 +0000 UTC Type:0 Mac:52:54:00:d4:91:fd Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:test-preload-962163 Clientid:01:52:54:00:d4:91:fd}
	I0904 06:52:13.955365 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined IP address 192.168.39.252 and MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:13.955422 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHKeyPath
	I0904 06:52:13.955494 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHPort
	I0904 06:52:13.955591 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHUsername
	I0904 06:52:13.955652 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHKeyPath
	I0904 06:52:13.955721 1152397 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/test-preload-962163/id_rsa Username:docker}
	I0904 06:52:13.955746 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHUsername
	I0904 06:52:13.955827 1152397 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/test-preload-962163/id_rsa Username:docker}
	I0904 06:52:14.032789 1152397 ssh_runner.go:195] Run: systemctl --version
	I0904 06:52:14.073869 1152397 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 06:52:14.215369 1152397 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0904 06:52:14.221613 1152397 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0904 06:52:14.221679 1152397 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 06:52:14.241035 1152397 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
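The find/mv invocation above disables any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so the CNI that minikube configures later wins. A minimal local Go analog of that rename pass (a sketch; the real step runs remotely over ssh_runner):

package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		name := e.Name()
		// Skip directories and files that are already disabled.
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		// Same filter as the logged find: *bridge* or *podman*.
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				log.Fatal(err)
			}
			log.Printf("disabled %s", src)
		}
	}
}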
	I0904 06:52:14.241069 1152397 start.go:495] detecting cgroup driver to use...
	I0904 06:52:14.241135 1152397 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 06:52:14.260278 1152397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 06:52:14.275469 1152397 docker.go:218] disabling cri-docker service (if available) ...
	I0904 06:52:14.275531 1152397 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 06:52:14.289997 1152397 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 06:52:14.304203 1152397 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 06:52:14.440512 1152397 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 06:52:14.574375 1152397 docker.go:234] disabling docker service ...
	I0904 06:52:14.574447 1152397 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 06:52:14.591505 1152397 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 06:52:14.605852 1152397 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 06:52:14.818955 1152397 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 06:52:14.958491 1152397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 06:52:14.973719 1152397 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 06:52:14.994230 1152397 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0904 06:52:14.994342 1152397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:52:15.005596 1152397 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 06:52:15.005670 1152397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:52:15.016581 1152397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:52:15.027902 1152397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:52:15.039358 1152397 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 06:52:15.051084 1152397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:52:15.062225 1152397 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 06:52:15.080660 1152397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
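The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A minimal Go analog of the first rewrite, assuming direct file access instead of ssh_runner (a sketch, not minikube's code):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}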
	I0904 06:52:15.092142 1152397 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 06:52:15.101735 1152397 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0904 06:52:15.101793 1152397 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0904 06:52:15.119539 1152397 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 06:52:15.130155 1152397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 06:52:15.263660 1152397 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0904 06:52:15.371850 1152397 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 06:52:15.371938 1152397 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 06:52:15.376851 1152397 start.go:563] Will wait 60s for crictl version
	I0904 06:52:15.376910 1152397 ssh_runner.go:195] Run: which crictl
	I0904 06:52:15.380574 1152397 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 06:52:15.418664 1152397 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0904 06:52:15.418772 1152397 ssh_runner.go:195] Run: crio --version
	I0904 06:52:15.446673 1152397 ssh_runner.go:195] Run: crio --version
	I0904 06:52:15.476459 1152397 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0904 06:52:15.477492 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetIP
	I0904 06:52:15.480108 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:15.480408 1152397 main.go:141] libmachine: (test-preload-962163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:91:fd", ip: ""} in network mk-test-preload-962163: {Iface:virbr1 ExpiryTime:2025-09-04 07:52:05 +0000 UTC Type:0 Mac:52:54:00:d4:91:fd Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:test-preload-962163 Clientid:01:52:54:00:d4:91:fd}
	I0904 06:52:15.480439 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined IP address 192.168.39.252 and MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:15.480611 1152397 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0904 06:52:15.484777 1152397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 06:52:15.498090 1152397 kubeadm.go:875] updating cluster {Name:test-preload-962163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-962163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.252 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 06:52:15.498214 1152397 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0904 06:52:15.498260 1152397 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 06:52:15.534655 1152397 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0904 06:52:15.534728 1152397 ssh_runner.go:195] Run: which lz4
	I0904 06:52:15.538631 1152397 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0904 06:52:15.542676 1152397 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0904 06:52:15.542720 1152397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0904 06:52:16.901494 1152397 crio.go:462] duration metric: took 1.362894601s to copy over tarball
	I0904 06:52:16.901590 1152397 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0904 06:52:18.613808 1152397 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.712184412s)
	I0904 06:52:18.613845 1152397 crio.go:469] duration metric: took 1.712322161s to extract the tarball
	I0904 06:52:18.613858 1152397 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0904 06:52:18.652953 1152397 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 06:52:18.697017 1152397 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 06:52:18.697044 1152397 cache_images.go:85] Images are preloaded, skipping loading
	I0904 06:52:18.697052 1152397 kubeadm.go:926] updating node { 192.168.39.252 8443 v1.32.0 crio true true} ...
	I0904 06:52:18.697165 1152397 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-962163 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.252
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-962163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 06:52:18.697251 1152397 ssh_runner.go:195] Run: crio config
	I0904 06:52:18.741756 1152397 cni.go:84] Creating CNI manager for ""
	I0904 06:52:18.741778 1152397 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 06:52:18.741791 1152397 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 06:52:18.741814 1152397 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.252 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-962163 NodeName:test-preload-962163 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.252"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.252 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 06:52:18.741925 1152397 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.252
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-962163"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.252"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.252"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 06:52:18.741986 1152397 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0904 06:52:18.753711 1152397 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 06:52:18.753775 1152397 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 06:52:18.764218 1152397 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0904 06:52:18.782609 1152397 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 06:52:18.800558 1152397 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I0904 06:52:18.818773 1152397 ssh_runner.go:195] Run: grep 192.168.39.252	control-plane.minikube.internal$ /etc/hosts
	I0904 06:52:18.822658 1152397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.252	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 06:52:18.836426 1152397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 06:52:18.971387 1152397 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 06:52:19.010657 1152397 certs.go:68] Setting up /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/test-preload-962163 for IP: 192.168.39.252
	I0904 06:52:19.010709 1152397 certs.go:194] generating shared ca certs ...
	I0904 06:52:19.010738 1152397 certs.go:226] acquiring lock for ca certs: {Name:mkb48abb711128619cd278e65e40c326a6b20d89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:52:19.010969 1152397 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.key
	I0904 06:52:19.011034 1152397 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/proxy-client-ca.key
	I0904 06:52:19.011050 1152397 certs.go:256] generating profile certs ...
	I0904 06:52:19.011181 1152397 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/test-preload-962163/client.key
	I0904 06:52:19.011288 1152397 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/test-preload-962163/apiserver.key.48f19888
	I0904 06:52:19.011353 1152397 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/test-preload-962163/proxy-client.key
	I0904 06:52:19.011523 1152397 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/1120074.pem (1338 bytes)
	W0904 06:52:19.011570 1152397 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/1120074_empty.pem, impossibly tiny 0 bytes
	I0904 06:52:19.011588 1152397 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 06:52:19.011621 1152397 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem (1082 bytes)
	I0904 06:52:19.011658 1152397 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/cert.pem (1123 bytes)
	I0904 06:52:19.011701 1152397 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/key.pem (1679 bytes)
	I0904 06:52:19.011757 1152397 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/files/etc/ssl/certs/11200742.pem (1708 bytes)
	I0904 06:52:19.012643 1152397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 06:52:19.043674 1152397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0904 06:52:19.077493 1152397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 06:52:19.106089 1152397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0904 06:52:19.132233 1152397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/test-preload-962163/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0904 06:52:19.159271 1152397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/test-preload-962163/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 06:52:19.186600 1152397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/test-preload-962163/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 06:52:19.212710 1152397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/test-preload-962163/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0904 06:52:19.238526 1152397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/files/etc/ssl/certs/11200742.pem --> /usr/share/ca-certificates/11200742.pem (1708 bytes)
	I0904 06:52:19.264872 1152397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 06:52:19.291456 1152397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/1120074.pem --> /usr/share/ca-certificates/1120074.pem (1338 bytes)
	I0904 06:52:19.317233 1152397 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 06:52:19.335347 1152397 ssh_runner.go:195] Run: openssl version
	I0904 06:52:19.341206 1152397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11200742.pem && ln -fs /usr/share/ca-certificates/11200742.pem /etc/ssl/certs/11200742.pem"
	I0904 06:52:19.352985 1152397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11200742.pem
	I0904 06:52:19.357609 1152397 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  4 06:04 /usr/share/ca-certificates/11200742.pem
	I0904 06:52:19.357671 1152397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11200742.pem
	I0904 06:52:19.364124 1152397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11200742.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 06:52:19.376768 1152397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 06:52:19.388743 1152397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 06:52:19.393492 1152397 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 05:54 /usr/share/ca-certificates/minikubeCA.pem
	I0904 06:52:19.393573 1152397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 06:52:19.400261 1152397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 06:52:19.412041 1152397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1120074.pem && ln -fs /usr/share/ca-certificates/1120074.pem /etc/ssl/certs/1120074.pem"
	I0904 06:52:19.423720 1152397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1120074.pem
	I0904 06:52:19.428229 1152397 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  4 06:04 /usr/share/ca-certificates/1120074.pem
	I0904 06:52:19.428289 1152397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1120074.pem
	I0904 06:52:19.435032 1152397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1120074.pem /etc/ssl/certs/51391683.0"
	I0904 06:52:19.446326 1152397 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 06:52:19.450940 1152397 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0904 06:52:19.457728 1152397 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0904 06:52:19.464284 1152397 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0904 06:52:19.471001 1152397 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0904 06:52:19.477414 1152397 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0904 06:52:19.483913 1152397 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
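Each `openssl x509 -checkend 86400` above exits non-zero when the certificate expires within 24 hours, which drives the decision to regenerate. A minimal Go equivalent of one such check using crypto/x509 (the path is taken from the log; the helper itself is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within
// the given window, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", expiring)
}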
	I0904 06:52:19.490330 1152397 kubeadm.go:392] StartCluster: {Name:test-preload-962163 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-962163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.252 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:52:19.490411 1152397 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 06:52:19.490450 1152397 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 06:52:19.526921 1152397 cri.go:89] found id: ""
	I0904 06:52:19.526988 1152397 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 06:52:19.538335 1152397 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0904 06:52:19.538369 1152397 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0904 06:52:19.538424 1152397 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0904 06:52:19.549813 1152397 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0904 06:52:19.550350 1152397 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-962163" does not appear in /home/jenkins/minikube-integration/21409-1115845/kubeconfig
	I0904 06:52:19.550478 1152397 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-1115845/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-962163" cluster setting kubeconfig missing "test-preload-962163" context setting]
	I0904 06:52:19.550735 1152397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/kubeconfig: {Name:mk586aba4eac8031d07aaf208d256e06f68e9260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:52:19.551292 1152397 kapi.go:59] client config for test-preload-962163: &rest.Config{Host:"https://192.168.39.252:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/test-preload-962163/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/test-preload-962163/client.key", CAFile:"/home/jenkins/minikube-integration/21409-1115845/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25a3920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0904 06:52:19.551726 1152397 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0904 06:52:19.551740 1152397 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0904 06:52:19.551744 1152397 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0904 06:52:19.551748 1152397 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0904 06:52:19.551752 1152397 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0904 06:52:19.552074 1152397 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0904 06:52:19.562576 1152397 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.39.252
	I0904 06:52:19.562614 1152397 kubeadm.go:1152] stopping kube-system containers ...
	I0904 06:52:19.562631 1152397 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0904 06:52:19.562694 1152397 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 06:52:19.595679 1152397 cri.go:89] found id: ""
	I0904 06:52:19.595747 1152397 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0904 06:52:19.613070 1152397 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 06:52:19.623810 1152397 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 06:52:19.623826 1152397 kubeadm.go:157] found existing configuration files:
	
	I0904 06:52:19.623866 1152397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0904 06:52:19.633260 1152397 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 06:52:19.633325 1152397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 06:52:19.643627 1152397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0904 06:52:19.653106 1152397 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 06:52:19.653149 1152397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 06:52:19.663309 1152397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0904 06:52:19.672777 1152397 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 06:52:19.672830 1152397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 06:52:19.682916 1152397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0904 06:52:19.692209 1152397 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 06:52:19.692252 1152397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0904 06:52:19.702239 1152397 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 06:52:19.712305 1152397 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0904 06:52:19.764082 1152397 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0904 06:52:20.711868 1152397 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0904 06:52:20.953863 1152397 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0904 06:52:21.020147 1152397 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0904 06:52:21.102527 1152397 api_server.go:52] waiting for apiserver process to appear ...
	I0904 06:52:21.102643 1152397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:52:21.603629 1152397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:52:22.103580 1152397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:52:22.603192 1152397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:52:23.102722 1152397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:52:23.603451 1152397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:52:23.632805 1152397 api_server.go:72] duration metric: took 2.530276451s to wait for apiserver process to appear ...
	I0904 06:52:23.632832 1152397 api_server.go:88] waiting for apiserver healthz status ...
	I0904 06:52:23.632854 1152397 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0904 06:52:25.976897 1152397 api_server.go:279] https://192.168.39.252:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0904 06:52:25.976929 1152397 api_server.go:103] status: https://192.168.39.252:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0904 06:52:25.976949 1152397 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0904 06:52:26.012266 1152397 api_server.go:279] https://192.168.39.252:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0904 06:52:26.012301 1152397 api_server.go:103] status: https://192.168.39.252:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0904 06:52:26.133608 1152397 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0904 06:52:26.148984 1152397 api_server.go:279] https://192.168.39.252:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:52:26.149014 1152397 api_server.go:103] status: https://192.168.39.252:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:52:26.633705 1152397 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0904 06:52:26.638460 1152397 api_server.go:279] https://192.168.39.252:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:52:26.638490 1152397 api_server.go:103] status: https://192.168.39.252:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:52:27.133165 1152397 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0904 06:52:27.138366 1152397 api_server.go:279] https://192.168.39.252:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 06:52:27.138399 1152397 api_server.go:103] status: https://192.168.39.252:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 06:52:27.633021 1152397 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0904 06:52:27.637654 1152397 api_server.go:279] https://192.168.39.252:8443/healthz returned 200:
	ok
	I0904 06:52:27.644128 1152397 api_server.go:141] control plane version: v1.32.0
	I0904 06:52:27.644157 1152397 api_server.go:131] duration metric: took 4.01131736s to wait for apiserver health ...
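
The healthz wait recorded above is, at heart, a retry loop against https://192.168.39.252:8443/healthz that tolerates 500s until the poststarthooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish. A minimal sketch in Go, not minikube's actual api_server.go: the endpoint and the ~500ms cadence come from the log above, while the TLS handling is simplified to InsecureSkipVerify purely for illustration (the real client presents the cluster CA and client certs).

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Illustration only: minikube's real client verifies the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the `returned 200: ok` case in the log
			}
			// 500 while poststarthooks are still completing; keep retrying.
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen above
	}
	return fmt.Errorf("%s not healthy within %v", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.252:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
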
	I0904 06:52:27.644169 1152397 cni.go:84] Creating CNI manager for ""
	I0904 06:52:27.644175 1152397 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 06:52:27.646016 1152397 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0904 06:52:27.647156 1152397 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0904 06:52:27.659804 1152397 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0904 06:52:27.680401 1152397 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 06:52:27.686491 1152397 system_pods.go:59] 7 kube-system pods found
	I0904 06:52:27.686526 1152397 system_pods.go:61] "coredns-668d6bf9bc-hlr5t" [493ae791-7e01-4a38-bad9-fa399b78e64a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 06:52:27.686534 1152397 system_pods.go:61] "etcd-test-preload-962163" [136e3fe1-f466-4255-ae14-4e663aaea1fe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 06:52:27.686542 1152397 system_pods.go:61] "kube-apiserver-test-preload-962163" [1900d406-5217-40ed-b4b7-3d4aeaec3f49] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 06:52:27.686550 1152397 system_pods.go:61] "kube-controller-manager-test-preload-962163" [8066e364-625c-4b89-9f33-c24a257ea657] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 06:52:27.686556 1152397 system_pods.go:61] "kube-proxy-g88zb" [90d9934e-51a2-42dc-8efa-56acb8e8e11a] Running
	I0904 06:52:27.686564 1152397 system_pods.go:61] "kube-scheduler-test-preload-962163" [e76f1ffe-b83a-4182-9f52-af36f1e2bbc4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 06:52:27.686573 1152397 system_pods.go:61] "storage-provisioner" [ff631e7a-8b9e-4815-a536-e3e5197db33d] Running
	I0904 06:52:27.686581 1152397 system_pods.go:74] duration metric: took 6.137916ms to wait for pod list to return data ...
	I0904 06:52:27.686594 1152397 node_conditions.go:102] verifying NodePressure condition ...
	I0904 06:52:27.690275 1152397 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0904 06:52:27.690299 1152397 node_conditions.go:123] node cpu capacity is 2
	I0904 06:52:27.690310 1152397 node_conditions.go:105] duration metric: took 3.711511ms to run NodePressure ...
	I0904 06:52:27.690333 1152397 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0904 06:52:27.942800 1152397 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0904 06:52:27.945974 1152397 kubeadm.go:735] kubelet initialised
	I0904 06:52:27.945993 1152397 kubeadm.go:736] duration metric: took 3.1672ms waiting for restarted kubelet to initialise ...
	I0904 06:52:27.946009 1152397 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 06:52:27.960892 1152397 ops.go:34] apiserver oom_adj: -16
	I0904 06:52:27.960914 1152397 kubeadm.go:593] duration metric: took 8.422538871s to restartPrimaryControlPlane
	I0904 06:52:27.960922 1152397 kubeadm.go:394] duration metric: took 8.470600023s to StartCluster
	I0904 06:52:27.960941 1152397 settings.go:142] acquiring lock: {Name:mkb015a02541f006ebfff677085f6c9619eaacb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:52:27.961031 1152397 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-1115845/kubeconfig
	I0904 06:52:27.961693 1152397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/kubeconfig: {Name:mk586aba4eac8031d07aaf208d256e06f68e9260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 06:52:27.961956 1152397 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.252 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 06:52:27.962074 1152397 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0904 06:52:27.962150 1152397 addons.go:69] Setting storage-provisioner=true in profile "test-preload-962163"
	I0904 06:52:27.962169 1152397 addons.go:238] Setting addon storage-provisioner=true in "test-preload-962163"
	W0904 06:52:27.962181 1152397 addons.go:247] addon storage-provisioner should already be in state true
	I0904 06:52:27.962214 1152397 host.go:66] Checking if "test-preload-962163" exists ...
	I0904 06:52:27.962232 1152397 config.go:182] Loaded profile config "test-preload-962163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0904 06:52:27.962248 1152397 addons.go:69] Setting default-storageclass=true in profile "test-preload-962163"
	I0904 06:52:27.962302 1152397 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-962163"
	I0904 06:52:27.962579 1152397 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:52:27.962633 1152397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:52:27.962749 1152397 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:52:27.962787 1152397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:52:27.963700 1152397 out.go:179] * Verifying Kubernetes components...
	I0904 06:52:27.964879 1152397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 06:52:27.977763 1152397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36067
	I0904 06:52:27.977992 1152397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45545
	I0904 06:52:27.978310 1152397 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:52:27.978420 1152397 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:52:27.978877 1152397 main.go:141] libmachine: Using API Version  1
	I0904 06:52:27.978880 1152397 main.go:141] libmachine: Using API Version  1
	I0904 06:52:27.978903 1152397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:52:27.978916 1152397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:52:27.979216 1152397 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:52:27.979244 1152397 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:52:27.979415 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetState
	I0904 06:52:27.979771 1152397 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:52:27.979812 1152397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:52:27.981750 1152397 kapi.go:59] client config for test-preload-962163: &rest.Config{Host:"https://192.168.39.252:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/test-preload-962163/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/test-preload-962163/client.key", CAFile:"/home/jenkins/minikube-integration/21409-1115845/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25a3920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
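
The rest.Config dumped above (host, client cert/key, cluster CA) corresponds to a client-go configuration along these lines. A sketch, assuming k8s.io/client-go is available; the host and file paths are taken verbatim from the log.

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.39.252:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/test-preload-962163/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/test-preload-962163/client.key",
			CAFile:   "/home/jenkins/minikube-integration/21409-1115845/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", cs)
}
```
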
	I0904 06:52:27.982027 1152397 addons.go:238] Setting addon default-storageclass=true in "test-preload-962163"
	W0904 06:52:27.982056 1152397 addons.go:247] addon default-storageclass should already be in state true
	I0904 06:52:27.982086 1152397 host.go:66] Checking if "test-preload-962163" exists ...
	I0904 06:52:27.982386 1152397 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:52:27.982442 1152397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:52:27.994865 1152397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44669
	I0904 06:52:27.995403 1152397 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:52:27.995891 1152397 main.go:141] libmachine: Using API Version  1
	I0904 06:52:27.995915 1152397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:52:27.996262 1152397 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:52:27.996471 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetState
	I0904 06:52:27.997932 1152397 main.go:141] libmachine: (test-preload-962163) Calling .DriverName
	I0904 06:52:27.999630 1152397 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 06:52:28.000798 1152397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38481
	I0904 06:52:28.000809 1152397 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 06:52:28.000826 1152397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 06:52:28.000846 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHHostname
	I0904 06:52:28.001323 1152397 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:52:28.001829 1152397 main.go:141] libmachine: Using API Version  1
	I0904 06:52:28.001891 1152397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:52:28.002277 1152397 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:52:28.002905 1152397 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:52:28.002957 1152397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:52:28.004067 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:28.004570 1152397 main.go:141] libmachine: (test-preload-962163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:91:fd", ip: ""} in network mk-test-preload-962163: {Iface:virbr1 ExpiryTime:2025-09-04 07:52:05 +0000 UTC Type:0 Mac:52:54:00:d4:91:fd Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:test-preload-962163 Clientid:01:52:54:00:d4:91:fd}
	I0904 06:52:28.004599 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined IP address 192.168.39.252 and MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:28.004762 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHPort
	I0904 06:52:28.004945 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHKeyPath
	I0904 06:52:28.005110 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHUsername
	I0904 06:52:28.005247 1152397 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/test-preload-962163/id_rsa Username:docker}
	I0904 06:52:28.017500 1152397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34239
	I0904 06:52:28.018023 1152397 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:52:28.018526 1152397 main.go:141] libmachine: Using API Version  1
	I0904 06:52:28.018549 1152397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:52:28.018943 1152397 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:52:28.019114 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetState
	I0904 06:52:28.020485 1152397 main.go:141] libmachine: (test-preload-962163) Calling .DriverName
	I0904 06:52:28.020699 1152397 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 06:52:28.020719 1152397 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 06:52:28.020744 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHHostname
	I0904 06:52:28.023512 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:28.024003 1152397 main.go:141] libmachine: (test-preload-962163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:91:fd", ip: ""} in network mk-test-preload-962163: {Iface:virbr1 ExpiryTime:2025-09-04 07:52:05 +0000 UTC Type:0 Mac:52:54:00:d4:91:fd Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:test-preload-962163 Clientid:01:52:54:00:d4:91:fd}
	I0904 06:52:28.024035 1152397 main.go:141] libmachine: (test-preload-962163) DBG | domain test-preload-962163 has defined IP address 192.168.39.252 and MAC address 52:54:00:d4:91:fd in network mk-test-preload-962163
	I0904 06:52:28.024191 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHPort
	I0904 06:52:28.024380 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHKeyPath
	I0904 06:52:28.024560 1152397 main.go:141] libmachine: (test-preload-962163) Calling .GetSSHUsername
	I0904 06:52:28.024701 1152397 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/test-preload-962163/id_rsa Username:docker}
	I0904 06:52:28.186362 1152397 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 06:52:28.204708 1152397 node_ready.go:35] waiting up to 6m0s for node "test-preload-962163" to be "Ready" ...
	I0904 06:52:28.325661 1152397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 06:52:28.369811 1152397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 06:52:29.000920 1152397 main.go:141] libmachine: Making call to close driver server
	I0904 06:52:29.000945 1152397 main.go:141] libmachine: (test-preload-962163) Calling .Close
	I0904 06:52:29.001007 1152397 main.go:141] libmachine: Making call to close driver server
	I0904 06:52:29.001036 1152397 main.go:141] libmachine: (test-preload-962163) Calling .Close
	I0904 06:52:29.001271 1152397 main.go:141] libmachine: Successfully made call to close driver server
	I0904 06:52:29.001291 1152397 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 06:52:29.001300 1152397 main.go:141] libmachine: Making call to close driver server
	I0904 06:52:29.001306 1152397 main.go:141] libmachine: (test-preload-962163) Calling .Close
	I0904 06:52:29.001370 1152397 main.go:141] libmachine: (test-preload-962163) DBG | Closing plugin on server side
	I0904 06:52:29.001374 1152397 main.go:141] libmachine: (test-preload-962163) DBG | Closing plugin on server side
	I0904 06:52:29.001391 1152397 main.go:141] libmachine: Successfully made call to close driver server
	I0904 06:52:29.001397 1152397 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 06:52:29.001404 1152397 main.go:141] libmachine: Making call to close driver server
	I0904 06:52:29.001411 1152397 main.go:141] libmachine: (test-preload-962163) Calling .Close
	I0904 06:52:29.001550 1152397 main.go:141] libmachine: Successfully made call to close driver server
	I0904 06:52:29.001570 1152397 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 06:52:29.002985 1152397 main.go:141] libmachine: Successfully made call to close driver server
	I0904 06:52:29.003008 1152397 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 06:52:29.003064 1152397 main.go:141] libmachine: (test-preload-962163) DBG | Closing plugin on server side
	I0904 06:52:29.010623 1152397 main.go:141] libmachine: Making call to close driver server
	I0904 06:52:29.010644 1152397 main.go:141] libmachine: (test-preload-962163) Calling .Close
	I0904 06:52:29.010883 1152397 main.go:141] libmachine: Successfully made call to close driver server
	I0904 06:52:29.010900 1152397 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 06:52:29.010914 1152397 main.go:141] libmachine: (test-preload-962163) DBG | Closing plugin on server side
	I0904 06:52:29.012208 1152397 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0904 06:52:29.013511 1152397 addons.go:514] duration metric: took 1.051445824s for enable addons: enabled=[storage-provisioner default-storageclass]
	W0904 06:52:30.209042 1152397 node_ready.go:57] node "test-preload-962163" has "Ready":"False" status (will retry)
	W0904 06:52:32.708762 1152397 node_ready.go:57] node "test-preload-962163" has "Ready":"False" status (will retry)
	W0904 06:52:34.708895 1152397 node_ready.go:57] node "test-preload-962163" has "Ready":"False" status (will retry)
	I0904 06:52:36.707909 1152397 node_ready.go:49] node "test-preload-962163" is "Ready"
	I0904 06:52:36.707950 1152397 node_ready.go:38] duration metric: took 8.503182641s for node "test-preload-962163" to be "Ready" ...
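
The node_ready wait that just completed amounts to polling the Node object until its NodeReady condition reports True. A hedged sketch (not minikube's actual node_ready.go) using client-go; WaitNodeReady is a hypothetical helper.

```go
package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WaitNodeReady polls the named node until its NodeReady condition is
// True, mirroring `waiting up to 6m0s for node ... to be "Ready"` above.
func WaitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true
				}
			}
		}
		time.Sleep(2 * time.Second) // the log shows roughly 2s between retries
	}
	return false
}
```
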
	I0904 06:52:36.707969 1152397 api_server.go:52] waiting for apiserver process to appear ...
	I0904 06:52:36.708033 1152397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:52:36.727011 1152397 api_server.go:72] duration metric: took 8.7650151s to wait for apiserver process to appear ...
	I0904 06:52:36.727043 1152397 api_server.go:88] waiting for apiserver healthz status ...
	I0904 06:52:36.727063 1152397 api_server.go:253] Checking apiserver healthz at https://192.168.39.252:8443/healthz ...
	I0904 06:52:36.733541 1152397 api_server.go:279] https://192.168.39.252:8443/healthz returned 200:
	ok
	I0904 06:52:36.734495 1152397 api_server.go:141] control plane version: v1.32.0
	I0904 06:52:36.734510 1152397 api_server.go:131] duration metric: took 7.461462ms to wait for apiserver health ...
	I0904 06:52:36.734518 1152397 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 06:52:36.737902 1152397 system_pods.go:59] 7 kube-system pods found
	I0904 06:52:36.737933 1152397 system_pods.go:61] "coredns-668d6bf9bc-hlr5t" [493ae791-7e01-4a38-bad9-fa399b78e64a] Running
	I0904 06:52:36.737947 1152397 system_pods.go:61] "etcd-test-preload-962163" [136e3fe1-f466-4255-ae14-4e663aaea1fe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 06:52:36.737957 1152397 system_pods.go:61] "kube-apiserver-test-preload-962163" [1900d406-5217-40ed-b4b7-3d4aeaec3f49] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 06:52:36.737969 1152397 system_pods.go:61] "kube-controller-manager-test-preload-962163" [8066e364-625c-4b89-9f33-c24a257ea657] Running
	I0904 06:52:36.737975 1152397 system_pods.go:61] "kube-proxy-g88zb" [90d9934e-51a2-42dc-8efa-56acb8e8e11a] Running
	I0904 06:52:36.737982 1152397 system_pods.go:61] "kube-scheduler-test-preload-962163" [e76f1ffe-b83a-4182-9f52-af36f1e2bbc4] Running
	I0904 06:52:36.737990 1152397 system_pods.go:61] "storage-provisioner" [ff631e7a-8b9e-4815-a536-e3e5197db33d] Running
	I0904 06:52:36.737998 1152397 system_pods.go:74] duration metric: took 3.472683ms to wait for pod list to return data ...
	I0904 06:52:36.738009 1152397 default_sa.go:34] waiting for default service account to be created ...
	I0904 06:52:36.740049 1152397 default_sa.go:45] found service account: "default"
	I0904 06:52:36.740068 1152397 default_sa.go:55] duration metric: took 2.050148ms for default service account to be created ...
	I0904 06:52:36.740076 1152397 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 06:52:36.743170 1152397 system_pods.go:86] 7 kube-system pods found
	I0904 06:52:36.743190 1152397 system_pods.go:89] "coredns-668d6bf9bc-hlr5t" [493ae791-7e01-4a38-bad9-fa399b78e64a] Running
	I0904 06:52:36.743198 1152397 system_pods.go:89] "etcd-test-preload-962163" [136e3fe1-f466-4255-ae14-4e663aaea1fe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 06:52:36.743222 1152397 system_pods.go:89] "kube-apiserver-test-preload-962163" [1900d406-5217-40ed-b4b7-3d4aeaec3f49] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 06:52:36.743233 1152397 system_pods.go:89] "kube-controller-manager-test-preload-962163" [8066e364-625c-4b89-9f33-c24a257ea657] Running
	I0904 06:52:36.743238 1152397 system_pods.go:89] "kube-proxy-g88zb" [90d9934e-51a2-42dc-8efa-56acb8e8e11a] Running
	I0904 06:52:36.743241 1152397 system_pods.go:89] "kube-scheduler-test-preload-962163" [e76f1ffe-b83a-4182-9f52-af36f1e2bbc4] Running
	I0904 06:52:36.743244 1152397 system_pods.go:89] "storage-provisioner" [ff631e7a-8b9e-4815-a536-e3e5197db33d] Running
	I0904 06:52:36.743256 1152397 system_pods.go:126] duration metric: took 3.169686ms to wait for k8s-apps to be running ...
	I0904 06:52:36.743265 1152397 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 06:52:36.743313 1152397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:52:36.762964 1152397 system_svc.go:56] duration metric: took 19.687054ms WaitForService to wait for kubelet
	I0904 06:52:36.763004 1152397 kubeadm.go:578] duration metric: took 8.801012725s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 06:52:36.763021 1152397 node_conditions.go:102] verifying NodePressure condition ...
	I0904 06:52:36.765798 1152397 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0904 06:52:36.765827 1152397 node_conditions.go:123] node cpu capacity is 2
	I0904 06:52:36.765847 1152397 node_conditions.go:105] duration metric: took 2.818919ms to run NodePressure ...
	I0904 06:52:36.765863 1152397 start.go:241] waiting for startup goroutines ...
	I0904 06:52:36.765874 1152397 start.go:246] waiting for cluster config update ...
	I0904 06:52:36.765890 1152397 start.go:255] writing updated cluster config ...
	I0904 06:52:36.766177 1152397 ssh_runner.go:195] Run: rm -f paused
	I0904 06:52:36.770655 1152397 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 06:52:36.771104 1152397 kapi.go:59] client config for test-preload-962163: &rest.Config{Host:"https://192.168.39.252:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/test-preload-962163/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/test-preload-962163/client.key", CAFile:"/home/jenkins/minikube-integration/21409-1115845/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25a3920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0904 06:52:36.773548 1152397 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-hlr5t" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:52:36.777441 1152397 pod_ready.go:94] pod "coredns-668d6bf9bc-hlr5t" is "Ready"
	I0904 06:52:36.777458 1152397 pod_ready.go:86] duration metric: took 3.890099ms for pod "coredns-668d6bf9bc-hlr5t" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:52:36.779459 1152397 pod_ready.go:83] waiting for pod "etcd-test-preload-962163" in "kube-system" namespace to be "Ready" or be gone ...
	W0904 06:52:38.784957 1152397 pod_ready.go:104] pod "etcd-test-preload-962163" is not "Ready", error: <nil>
	W0904 06:52:40.785378 1152397 pod_ready.go:104] pod "etcd-test-preload-962163" is not "Ready", error: <nil>
	I0904 06:52:41.785560 1152397 pod_ready.go:94] pod "etcd-test-preload-962163" is "Ready"
	I0904 06:52:41.785587 1152397 pod_ready.go:86] duration metric: took 5.006113024s for pod "etcd-test-preload-962163" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:52:41.787690 1152397 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-962163" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:52:41.791442 1152397 pod_ready.go:94] pod "kube-apiserver-test-preload-962163" is "Ready"
	I0904 06:52:41.791458 1152397 pod_ready.go:86] duration metric: took 3.750058ms for pod "kube-apiserver-test-preload-962163" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:52:41.793350 1152397 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-962163" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:52:41.797335 1152397 pod_ready.go:94] pod "kube-controller-manager-test-preload-962163" is "Ready"
	I0904 06:52:41.797358 1152397 pod_ready.go:86] duration metric: took 3.98059ms for pod "kube-controller-manager-test-preload-962163" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:52:41.799086 1152397 pod_ready.go:83] waiting for pod "kube-proxy-g88zb" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:52:41.984365 1152397 pod_ready.go:94] pod "kube-proxy-g88zb" is "Ready"
	I0904 06:52:41.984393 1152397 pod_ready.go:86] duration metric: took 185.290386ms for pod "kube-proxy-g88zb" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:52:42.183505 1152397 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-962163" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:52:42.583758 1152397 pod_ready.go:94] pod "kube-scheduler-test-preload-962163" is "Ready"
	I0904 06:52:42.583794 1152397 pod_ready.go:86] duration metric: took 400.263276ms for pod "kube-scheduler-test-preload-962163" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 06:52:42.583809 1152397 pod_ready.go:40] duration metric: took 5.813128771s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
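
The pod_ready checks above treat a pod as "Ready" when its PodReady condition is True, which is why pods whose containers report ContainersNotReady keep being retried. A sketch of that predicate; IsPodReady is a hypothetical helper, not minikube's code.

```go
package podwait

import corev1 "k8s.io/api/core/v1"

// IsPodReady reports whether the pod's PodReady condition is True --
// the predicate behind lines like `pod "etcd-test-preload-962163" is "Ready"`.
func IsPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
```
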
	I0904 06:52:42.627764 1152397 start.go:617] kubectl: 1.33.2, cluster: 1.32.0 (minor skew: 1)
	I0904 06:52:42.630187 1152397 out.go:179] * Done! kubectl is now configured to use "test-preload-962163" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 04 06:52:43 test-preload-962163 crio[840]: time="2025-09-04 06:52:43.499113409Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756968763499088032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9100f19c-6c96-46f4-9956-285e710dc143 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 06:52:43 test-preload-962163 crio[840]: time="2025-09-04 06:52:43.499600685Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=947ccab9-79a0-403c-9f90-4cc8c7e1fac3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 06:52:43 test-preload-962163 crio[840]: time="2025-09-04 06:52:43.499663874Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=947ccab9-79a0-403c-9f90-4cc8c7e1fac3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 06:52:43 test-preload-962163 crio[840]: time="2025-09-04 06:52:43.499869089Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:06107a1e91c4fce9c5555b6a201fb39108064345bd673d7cd838b583d9454aa1,PodSandboxId:cabb84cf2eaa745dd1c244ca78becd4966a7b2022d7a82950154f59f648d2406,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1756968754052222646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-hlr5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 493ae791-7e01-4a38-bad9-fa399b78e64a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b23adaf0c65fa9f1661027547b2a36bf2f5e756225eeded0342515d1407dd877,PodSandboxId:003d2c747a02f61ddd624fedbe03984510054b6023cd95d70f761ce44ef13250,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1756968746452611707,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g88zb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 90d9934e-51a2-42dc-8efa-56acb8e8e11a,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ba27a79c4a04211cf30857023eda0e6fd3d278666de79c45776db160c9003d2,PodSandboxId:287c68ae84b5ac26546076cc1ac13d7ca30958bf97ade664ac6c77b5362af256,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1756968746448619678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff
631e7a-8b9e-4815-a536-e3e5197db33d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f69c3fb4cc540a55c26ea204827551e4d63f9d98cdad5653a4128340c0c2038,PodSandboxId:b85abe06233a781bf34c3b3d2ae27f73f6bf0f4469e72c7b9f91cce4a14a4424,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1756968743235805553,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-962163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397a2183d
d5f9589f2cf9e1e632a58ab,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32e1f74a09e19285be0e5db790b20b5782721b4371e5f3c013bbc031ec46ad00,PodSandboxId:5784e0063e3e2d511ef4d0cbd5da6b96b00ade67c225b43ea710a0f3a4907642,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1756968743238928225,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-962163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 6e6f22c38922e932d7a6ba130575d3b6,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:728ac53929da18a557a3f0fb1ce56cf307cf3ad8aef591e66ab1affc2a36b01a,PodSandboxId:e0763783465d7d9c39bd1e8e58fa3dea68dd06a40a35d8425042a13b5e2ababa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1756968743198699524,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-962163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 107ce6fdeeea32bfb5af990ba0a32423,}
,Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985f8565239fa05ee2586dc15789e3c6fd8147c3805a33adfeebcb8c36bdb092,PodSandboxId:25c208cc7de6ada32d10f700210343c06777180464069d3b33aa4cba4b1d2665,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1756968743194132165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-962163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45e909057baba0bb29e823f51a23dd7b,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=947ccab9-79a0-403c-9f90-4cc8c7e1fac3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 06:52:43 test-preload-962163 crio[840]: time="2025-09-04 06:52:43.537482915Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b55d5eba-25bc-4285-a2e2-0f5bdda75f48 name=/runtime.v1.RuntimeService/Version
	Sep 04 06:52:43 test-preload-962163 crio[840]: time="2025-09-04 06:52:43.537575749Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b55d5eba-25bc-4285-a2e2-0f5bdda75f48 name=/runtime.v1.RuntimeService/Version
	Sep 04 06:52:43 test-preload-962163 crio[840]: time="2025-09-04 06:52:43.538528074Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e5e46e76-d99c-4991-bcb2-eb2cded89d2d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 06:52:43 test-preload-962163 crio[840]: time="2025-09-04 06:52:43.539001826Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756968763538978283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5e46e76-d99c-4991-bcb2-eb2cded89d2d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 06:52:43 test-preload-962163 crio[840]: time="2025-09-04 06:52:43.539541741Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d5a559ca-fad8-4601-898c-9e0613826d92 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 06:52:43 test-preload-962163 crio[840]: time="2025-09-04 06:52:43.539612580Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d5a559ca-fad8-4601-898c-9e0613826d92 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 06:52:43 test-preload-962163 crio[840]: time="2025-09-04 06:52:43.539772321Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:06107a1e91c4fce9c5555b6a201fb39108064345bd673d7cd838b583d9454aa1,PodSandboxId:cabb84cf2eaa745dd1c244ca78becd4966a7b2022d7a82950154f59f648d2406,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1756968754052222646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-hlr5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 493ae791-7e01-4a38-bad9-fa399b78e64a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b23adaf0c65fa9f1661027547b2a36bf2f5e756225eeded0342515d1407dd877,PodSandboxId:003d2c747a02f61ddd624fedbe03984510054b6023cd95d70f761ce44ef13250,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1756968746452611707,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g88zb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 90d9934e-51a2-42dc-8efa-56acb8e8e11a,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ba27a79c4a04211cf30857023eda0e6fd3d278666de79c45776db160c9003d2,PodSandboxId:287c68ae84b5ac26546076cc1ac13d7ca30958bf97ade664ac6c77b5362af256,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1756968746448619678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff
631e7a-8b9e-4815-a536-e3e5197db33d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f69c3fb4cc540a55c26ea204827551e4d63f9d98cdad5653a4128340c0c2038,PodSandboxId:b85abe06233a781bf34c3b3d2ae27f73f6bf0f4469e72c7b9f91cce4a14a4424,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1756968743235805553,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-962163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397a2183d
d5f9589f2cf9e1e632a58ab,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32e1f74a09e19285be0e5db790b20b5782721b4371e5f3c013bbc031ec46ad00,PodSandboxId:5784e0063e3e2d511ef4d0cbd5da6b96b00ade67c225b43ea710a0f3a4907642,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1756968743238928225,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-962163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 6e6f22c38922e932d7a6ba130575d3b6,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:728ac53929da18a557a3f0fb1ce56cf307cf3ad8aef591e66ab1affc2a36b01a,PodSandboxId:e0763783465d7d9c39bd1e8e58fa3dea68dd06a40a35d8425042a13b5e2ababa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1756968743198699524,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-962163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 107ce6fdeeea32bfb5af990ba0a32423,}
,Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985f8565239fa05ee2586dc15789e3c6fd8147c3805a33adfeebcb8c36bdb092,PodSandboxId:25c208cc7de6ada32d10f700210343c06777180464069d3b33aa4cba4b1d2665,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1756968743194132165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-962163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45e909057baba0bb29e823f51a23dd7b,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d5a559ca-fad8-4601-898c-9e0613826d92 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 06:52:43 test-preload-962163 crio[840]: time="2025-09-04 06:52:43.576613215Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=135cb12a-675a-4347-bd75-9c41e971ee81 name=/runtime.v1.RuntimeService/Version
	Sep 04 06:52:43 test-preload-962163 crio[840]: time="2025-09-04 06:52:43.576683785Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=135cb12a-675a-4347-bd75-9c41e971ee81 name=/runtime.v1.RuntimeService/Version
	Sep 04 06:52:43 test-preload-962163 crio[840]: time="2025-09-04 06:52:43.577757665Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5172795a-a2d7-4a72-b4a3-bb1093d8b54e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 06:52:43 test-preload-962163 crio[840]: time="2025-09-04 06:52:43.578208785Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756968763578185406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5172795a-a2d7-4a72-b4a3-bb1093d8b54e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 06:52:43 test-preload-962163 crio[840]: time="2025-09-04 06:52:43.578660231Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8530b596-17cd-47f0-8b63-ca30034f14f7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 06:52:43 test-preload-962163 crio[840]: time="2025-09-04 06:52:43.578775512Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8530b596-17cd-47f0-8b63-ca30034f14f7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 06:52:43 test-preload-962163 crio[840]: time="2025-09-04 06:52:43.579292527Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:06107a1e91c4fce9c5555b6a201fb39108064345bd673d7cd838b583d9454aa1,PodSandboxId:cabb84cf2eaa745dd1c244ca78becd4966a7b2022d7a82950154f59f648d2406,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1756968754052222646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-hlr5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 493ae791-7e01-4a38-bad9-fa399b78e64a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b23adaf0c65fa9f1661027547b2a36bf2f5e756225eeded0342515d1407dd877,PodSandboxId:003d2c747a02f61ddd624fedbe03984510054b6023cd95d70f761ce44ef13250,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1756968746452611707,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g88zb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 90d9934e-51a2-42dc-8efa-56acb8e8e11a,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ba27a79c4a04211cf30857023eda0e6fd3d278666de79c45776db160c9003d2,PodSandboxId:287c68ae84b5ac26546076cc1ac13d7ca30958bf97ade664ac6c77b5362af256,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1756968746448619678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff
631e7a-8b9e-4815-a536-e3e5197db33d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f69c3fb4cc540a55c26ea204827551e4d63f9d98cdad5653a4128340c0c2038,PodSandboxId:b85abe06233a781bf34c3b3d2ae27f73f6bf0f4469e72c7b9f91cce4a14a4424,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1756968743235805553,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-962163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397a2183d
d5f9589f2cf9e1e632a58ab,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32e1f74a09e19285be0e5db790b20b5782721b4371e5f3c013bbc031ec46ad00,PodSandboxId:5784e0063e3e2d511ef4d0cbd5da6b96b00ade67c225b43ea710a0f3a4907642,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1756968743238928225,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-962163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 6e6f22c38922e932d7a6ba130575d3b6,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:728ac53929da18a557a3f0fb1ce56cf307cf3ad8aef591e66ab1affc2a36b01a,PodSandboxId:e0763783465d7d9c39bd1e8e58fa3dea68dd06a40a35d8425042a13b5e2ababa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1756968743198699524,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-962163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 107ce6fdeeea32bfb5af990ba0a32423,}
,Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985f8565239fa05ee2586dc15789e3c6fd8147c3805a33adfeebcb8c36bdb092,PodSandboxId:25c208cc7de6ada32d10f700210343c06777180464069d3b33aa4cba4b1d2665,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1756968743194132165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-962163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45e909057baba0bb29e823f51a23dd7b,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8530b596-17cd-47f0-8b63-ca30034f14f7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 06:52:43 test-preload-962163 crio[840]: time="2025-09-04 06:52:43.611884574Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=783b8324-4f95-466f-b0f1-55654639b9e2 name=/runtime.v1.RuntimeService/Version
	Sep 04 06:52:43 test-preload-962163 crio[840]: time="2025-09-04 06:52:43.611963727Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=783b8324-4f95-466f-b0f1-55654639b9e2 name=/runtime.v1.RuntimeService/Version
	Sep 04 06:52:43 test-preload-962163 crio[840]: time="2025-09-04 06:52:43.613193343Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9efe5571-063f-4e25-baa5-a3c046c91944 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 06:52:43 test-preload-962163 crio[840]: time="2025-09-04 06:52:43.613745444Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756968763613593027,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9efe5571-063f-4e25-baa5-a3c046c91944 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 06:52:43 test-preload-962163 crio[840]: time="2025-09-04 06:52:43.614390899Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=358718ef-85b3-4faf-97aa-d3b79a20c465 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 06:52:43 test-preload-962163 crio[840]: time="2025-09-04 06:52:43.614517367Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=358718ef-85b3-4faf-97aa-d3b79a20c465 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 06:52:43 test-preload-962163 crio[840]: time="2025-09-04 06:52:43.615097249Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:06107a1e91c4fce9c5555b6a201fb39108064345bd673d7cd838b583d9454aa1,PodSandboxId:cabb84cf2eaa745dd1c244ca78becd4966a7b2022d7a82950154f59f648d2406,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1756968754052222646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-hlr5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 493ae791-7e01-4a38-bad9-fa399b78e64a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b23adaf0c65fa9f1661027547b2a36bf2f5e756225eeded0342515d1407dd877,PodSandboxId:003d2c747a02f61ddd624fedbe03984510054b6023cd95d70f761ce44ef13250,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1756968746452611707,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g88zb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 90d9934e-51a2-42dc-8efa-56acb8e8e11a,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ba27a79c4a04211cf30857023eda0e6fd3d278666de79c45776db160c9003d2,PodSandboxId:287c68ae84b5ac26546076cc1ac13d7ca30958bf97ade664ac6c77b5362af256,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1756968746448619678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff
631e7a-8b9e-4815-a536-e3e5197db33d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f69c3fb4cc540a55c26ea204827551e4d63f9d98cdad5653a4128340c0c2038,PodSandboxId:b85abe06233a781bf34c3b3d2ae27f73f6bf0f4469e72c7b9f91cce4a14a4424,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1756968743235805553,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-962163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397a2183d
d5f9589f2cf9e1e632a58ab,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32e1f74a09e19285be0e5db790b20b5782721b4371e5f3c013bbc031ec46ad00,PodSandboxId:5784e0063e3e2d511ef4d0cbd5da6b96b00ade67c225b43ea710a0f3a4907642,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1756968743238928225,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-962163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 6e6f22c38922e932d7a6ba130575d3b6,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:728ac53929da18a557a3f0fb1ce56cf307cf3ad8aef591e66ab1affc2a36b01a,PodSandboxId:e0763783465d7d9c39bd1e8e58fa3dea68dd06a40a35d8425042a13b5e2ababa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1756968743198699524,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-962163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 107ce6fdeeea32bfb5af990ba0a32423,}
,Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985f8565239fa05ee2586dc15789e3c6fd8147c3805a33adfeebcb8c36bdb092,PodSandboxId:25c208cc7de6ada32d10f700210343c06777180464069d3b33aa4cba4b1d2665,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1756968743194132165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-962163,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45e909057baba0bb29e823f51a23dd7b,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=358718ef-85b3-4faf-97aa-d3b79a20c465 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	06107a1e91c4f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 seconds ago       Running             coredns                   1                   cabb84cf2eaa7       coredns-668d6bf9bc-hlr5t
	b23adaf0c65fa       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   17 seconds ago      Running             kube-proxy                1                   003d2c747a02f       kube-proxy-g88zb
	7ba27a79c4a04       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 seconds ago      Running             storage-provisioner       1                   287c68ae84b5a       storage-provisioner
	32e1f74a09e19       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   20 seconds ago      Running             kube-controller-manager   1                   5784e0063e3e2       kube-controller-manager-test-preload-962163
	4f69c3fb4cc54       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   20 seconds ago      Running             kube-scheduler            1                   b85abe06233a7       kube-scheduler-test-preload-962163
	728ac53929da1       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   20 seconds ago      Running             etcd                      1                   e0763783465d7       etcd-test-preload-962163
	985f8565239fa       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   20 seconds ago      Running             kube-apiserver            1                   25c208cc7de6a       kube-apiserver-test-preload-962163
	
	
	==> coredns [06107a1e91c4fce9c5555b6a201fb39108064345bd673d7cd838b583d9454aa1] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40676 - 25350 "HINFO IN 5405737572391522100.8928134555934414656. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.062432147s
	
	
	==> describe nodes <==
	Name:               test-preload-962163
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-962163
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c3fa37de45a2901b215fab008201edf72ce5a1ff
	                    minikube.k8s.io/name=test-preload-962163
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T06_50_47_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 06:50:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-962163
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 06:52:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 06:52:36 +0000   Thu, 04 Sep 2025 06:50:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 06:52:36 +0000   Thu, 04 Sep 2025 06:50:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 06:52:36 +0000   Thu, 04 Sep 2025 06:50:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 06:52:36 +0000   Thu, 04 Sep 2025 06:52:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.252
	  Hostname:    test-preload-962163
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 667b2eae48744bc1aff73474adfe13f3
	  System UUID:                667b2eae-4874-4bc1-aff7-3474adfe13f3
	  Boot ID:                    c4037c99-87ef-496b-ac47-7b707345cd7a
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-hlr5t                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     111s
	  kube-system                 etcd-test-preload-962163                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         116s
	  kube-system                 kube-apiserver-test-preload-962163             250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-test-preload-962163    200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-g88zb                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-test-preload-962163             100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 17s                  kube-proxy       
	  Normal   Starting                 110s                 kube-proxy       
	  Normal   NodeAllocatableEnforced  2m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m2s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m2s (x8 over 2m2s)  kubelet          Node test-preload-962163 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m2s (x8 over 2m2s)  kubelet          Node test-preload-962163 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m2s (x7 over 2m2s)  kubelet          Node test-preload-962163 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  116s                 kubelet          Node test-preload-962163 status is now: NodeHasSufficientMemory
	  Normal   Starting                 116s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  116s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    116s                 kubelet          Node test-preload-962163 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     116s                 kubelet          Node test-preload-962163 status is now: NodeHasSufficientPID
	  Normal   NodeReady                115s                 kubelet          Node test-preload-962163 status is now: NodeReady
	  Normal   RegisteredNode           112s                 node-controller  Node test-preload-962163 event: Registered Node test-preload-962163 in Controller
	  Normal   CIDRAssignmentFailed     112s                 cidrAllocator    Node test-preload-962163 status is now: CIDRAssignmentFailed
	  Normal   NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node test-preload-962163 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node test-preload-962163 status is now: NodeHasSufficientMemory
	  Normal   Starting                 22s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node test-preload-962163 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 17s                  kubelet          Node test-preload-962163 has been rebooted, boot id: c4037c99-87ef-496b-ac47-7b707345cd7a
	  Normal   RegisteredNode           14s                  node-controller  Node test-preload-962163 event: Registered Node test-preload-962163 in Controller
	
	
	==> dmesg <==
	[Sep 4 06:51] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[Sep 4 06:52] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002544] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.976520] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.083679] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.091258] kauditd_printk_skb: 74 callbacks suppressed
	[  +5.487161] kauditd_printk_skb: 177 callbacks suppressed
	[  +0.001032] kauditd_printk_skb: 128 callbacks suppressed
	
	
	==> etcd [728ac53929da18a557a3f0fb1ce56cf307cf3ad8aef591e66ab1affc2a36b01a] <==
	{"level":"info","ts":"2025-09-04T06:52:23.547948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"537123dbb156f37f switched to configuration voters=(6012626403996398463)"}
	{"level":"info","ts":"2025-09-04T06:52:23.548011Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4977c847fd6b5c16","local-member-id":"537123dbb156f37f","added-peer-id":"537123dbb156f37f","added-peer-peer-urls":["https://192.168.39.252:2380"]}
	{"level":"info","ts":"2025-09-04T06:52:23.548116Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4977c847fd6b5c16","local-member-id":"537123dbb156f37f","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-04T06:52:23.548151Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-04T06:52:23.560483Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-04T06:52:23.561725Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"537123dbb156f37f","initial-advertise-peer-urls":["https://192.168.39.252:2380"],"listen-peer-urls":["https://192.168.39.252:2380"],"advertise-client-urls":["https://192.168.39.252:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.252:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-04T06:52:23.561999Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-04T06:52:23.562863Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.252:2380"}
	{"level":"info","ts":"2025-09-04T06:52:23.565421Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.252:2380"}
	{"level":"info","ts":"2025-09-04T06:52:24.923009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"537123dbb156f37f is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-04T06:52:24.923098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"537123dbb156f37f became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-04T06:52:24.923117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"537123dbb156f37f received MsgPreVoteResp from 537123dbb156f37f at term 2"}
	{"level":"info","ts":"2025-09-04T06:52:24.923128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"537123dbb156f37f became candidate at term 3"}
	{"level":"info","ts":"2025-09-04T06:52:24.923135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"537123dbb156f37f received MsgVoteResp from 537123dbb156f37f at term 3"}
	{"level":"info","ts":"2025-09-04T06:52:24.923143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"537123dbb156f37f became leader at term 3"}
	{"level":"info","ts":"2025-09-04T06:52:24.923150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 537123dbb156f37f elected leader 537123dbb156f37f at term 3"}
	{"level":"info","ts":"2025-09-04T06:52:24.925471Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-04T06:52:24.925637Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-04T06:52:24.925944Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-04T06:52:24.925959Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-04T06:52:24.925475Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"537123dbb156f37f","local-member-attributes":"{Name:test-preload-962163 ClientURLs:[https://192.168.39.252:2379]}","request-path":"/0/members/537123dbb156f37f/attributes","cluster-id":"4977c847fd6b5c16","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-04T06:52:24.926376Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-04T06:52:24.926434Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-04T06:52:24.927055Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.252:2379"}
	{"level":"info","ts":"2025-09-04T06:52:24.927063Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 06:52:43 up 0 min,  0 users,  load average: 1.49, 0.39, 0.13
	Linux test-preload-962163 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Sep  3 00:15:45 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [985f8565239fa05ee2586dc15789e3c6fd8147c3805a33adfeebcb8c36bdb092] <==
	I0904 06:52:26.017971       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0904 06:52:26.018215       1 aggregator.go:171] initial CRD sync complete...
	I0904 06:52:26.018245       1 autoregister_controller.go:144] Starting autoregister controller
	I0904 06:52:26.018251       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0904 06:52:26.047060       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0904 06:52:26.069444       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0904 06:52:26.069500       1 policy_source.go:240] refreshing policies
	I0904 06:52:26.113996       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0904 06:52:26.114288       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0904 06:52:26.115185       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0904 06:52:26.115218       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0904 06:52:26.114330       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0904 06:52:26.116724       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0904 06:52:26.143340       1 cache.go:39] Caches are synced for autoregister controller
	I0904 06:52:26.146499       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0904 06:52:26.154069       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0904 06:52:26.208491       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0904 06:52:26.915051       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0904 06:52:27.751539       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0904 06:52:27.784453       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0904 06:52:27.805336       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0904 06:52:27.811166       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0904 06:52:29.248083       1 controller.go:615] quota admission added evaluator for: endpoints
	I0904 06:52:29.496769       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0904 06:52:29.698980       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [32e1f74a09e19285be0e5db790b20b5782721b4371e5f3c013bbc031ec46ad00] <==
	I0904 06:52:29.245443       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0904 06:52:29.246033       1 shared_informer.go:320] Caches are synced for PVC protection
	I0904 06:52:29.246511       1 shared_informer.go:320] Caches are synced for crt configmap
	I0904 06:52:29.247857       1 shared_informer.go:320] Caches are synced for expand
	I0904 06:52:29.249022       1 shared_informer.go:320] Caches are synced for daemon sets
	I0904 06:52:29.249909       1 shared_informer.go:320] Caches are synced for resource quota
	I0904 06:52:29.266404       1 shared_informer.go:320] Caches are synced for garbage collector
	I0904 06:52:29.268670       1 shared_informer.go:320] Caches are synced for namespace
	I0904 06:52:29.269877       1 shared_informer.go:320] Caches are synced for service account
	I0904 06:52:29.273136       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0904 06:52:29.278691       1 shared_informer.go:320] Caches are synced for disruption
	I0904 06:52:29.280923       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0904 06:52:29.283169       1 shared_informer.go:320] Caches are synced for taint
	I0904 06:52:29.283255       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0904 06:52:29.283335       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-962163"
	I0904 06:52:29.283376       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0904 06:52:29.301887       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-962163"
	I0904 06:52:29.704597       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="431.365275ms"
	I0904 06:52:29.704677       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="36.216µs"
	I0904 06:52:34.177559       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="114.655µs"
	I0904 06:52:35.193112       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="13.585239ms"
	I0904 06:52:35.193239       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="85.398µs"
	I0904 06:52:36.447410       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-962163"
	I0904 06:52:36.458906       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-962163"
	I0904 06:52:39.285251       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b23adaf0c65fa9f1661027547b2a36bf2f5e756225eeded0342515d1407dd877] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0904 06:52:26.616692       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0904 06:52:26.625766       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.252"]
	E0904 06:52:26.625933       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 06:52:26.654865       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0904 06:52:26.654908       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0904 06:52:26.654991       1 server_linux.go:170] "Using iptables Proxier"
	I0904 06:52:26.657745       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 06:52:26.658149       1 server.go:497] "Version info" version="v1.32.0"
	I0904 06:52:26.658194       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:52:26.659568       1 config.go:199] "Starting service config controller"
	I0904 06:52:26.659624       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0904 06:52:26.659667       1 config.go:105] "Starting endpoint slice config controller"
	I0904 06:52:26.659683       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0904 06:52:26.662135       1 config.go:329] "Starting node config controller"
	I0904 06:52:26.662178       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0904 06:52:26.759766       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0904 06:52:26.759766       1 shared_informer.go:320] Caches are synced for service config
	I0904 06:52:26.762966       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4f69c3fb4cc540a55c26ea204827551e4d63f9d98cdad5653a4128340c0c2038] <==
	I0904 06:52:24.474316       1 serving.go:386] Generated self-signed cert in-memory
	W0904 06:52:26.003967       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0904 06:52:26.004056       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0904 06:52:26.004090       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0904 06:52:26.004146       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0904 06:52:26.046410       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I0904 06:52:26.051893       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 06:52:26.054078       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 06:52:26.054632       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0904 06:52:26.057059       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0904 06:52:26.057161       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0904 06:52:26.156610       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 04 06:52:26 test-preload-962163 kubelet[1160]: I0904 06:52:26.011747    1160 apiserver.go:52] "Watching apiserver"
	Sep 04 06:52:26 test-preload-962163 kubelet[1160]: E0904 06:52:26.036292    1160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-hlr5t" podUID="493ae791-7e01-4a38-bad9-fa399b78e64a"
	Sep 04 06:52:26 test-preload-962163 kubelet[1160]: I0904 06:52:26.130030    1160 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Sep 04 06:52:26 test-preload-962163 kubelet[1160]: I0904 06:52:26.172482    1160 kubelet_node_status.go:125] "Node was previously registered" node="test-preload-962163"
	Sep 04 06:52:26 test-preload-962163 kubelet[1160]: I0904 06:52:26.172582    1160 kubelet_node_status.go:79] "Successfully registered node" node="test-preload-962163"
	Sep 04 06:52:26 test-preload-962163 kubelet[1160]: I0904 06:52:26.172606    1160 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 04 06:52:26 test-preload-962163 kubelet[1160]: I0904 06:52:26.175218    1160 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 04 06:52:26 test-preload-962163 kubelet[1160]: I0904 06:52:26.176108    1160 setters.go:602] "Node became not ready" node="test-preload-962163" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-04T06:52:26Z","lastTransitionTime":"2025-09-04T06:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Sep 04 06:52:26 test-preload-962163 kubelet[1160]: I0904 06:52:26.204443    1160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ff631e7a-8b9e-4815-a536-e3e5197db33d-tmp\") pod \"storage-provisioner\" (UID: \"ff631e7a-8b9e-4815-a536-e3e5197db33d\") " pod="kube-system/storage-provisioner"
	Sep 04 06:52:26 test-preload-962163 kubelet[1160]: I0904 06:52:26.204507    1160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90d9934e-51a2-42dc-8efa-56acb8e8e11a-xtables-lock\") pod \"kube-proxy-g88zb\" (UID: \"90d9934e-51a2-42dc-8efa-56acb8e8e11a\") " pod="kube-system/kube-proxy-g88zb"
	Sep 04 06:52:26 test-preload-962163 kubelet[1160]: I0904 06:52:26.204526    1160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90d9934e-51a2-42dc-8efa-56acb8e8e11a-lib-modules\") pod \"kube-proxy-g88zb\" (UID: \"90d9934e-51a2-42dc-8efa-56acb8e8e11a\") " pod="kube-system/kube-proxy-g88zb"
	Sep 04 06:52:26 test-preload-962163 kubelet[1160]: E0904 06:52:26.204961    1160 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 04 06:52:26 test-preload-962163 kubelet[1160]: E0904 06:52:26.207064    1160 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/493ae791-7e01-4a38-bad9-fa399b78e64a-config-volume podName:493ae791-7e01-4a38-bad9-fa399b78e64a nodeName:}" failed. No retries permitted until 2025-09-04 06:52:26.705205619 +0000 UTC m=+5.786474131 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/493ae791-7e01-4a38-bad9-fa399b78e64a-config-volume") pod "coredns-668d6bf9bc-hlr5t" (UID: "493ae791-7e01-4a38-bad9-fa399b78e64a") : object "kube-system"/"coredns" not registered
	Sep 04 06:52:26 test-preload-962163 kubelet[1160]: E0904 06:52:26.708433    1160 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 04 06:52:26 test-preload-962163 kubelet[1160]: E0904 06:52:26.708514    1160 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/493ae791-7e01-4a38-bad9-fa399b78e64a-config-volume podName:493ae791-7e01-4a38-bad9-fa399b78e64a nodeName:}" failed. No retries permitted until 2025-09-04 06:52:27.708502401 +0000 UTC m=+6.789770914 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/493ae791-7e01-4a38-bad9-fa399b78e64a-config-volume") pod "coredns-668d6bf9bc-hlr5t" (UID: "493ae791-7e01-4a38-bad9-fa399b78e64a") : object "kube-system"/"coredns" not registered
	Sep 04 06:52:27 test-preload-962163 kubelet[1160]: E0904 06:52:27.715749    1160 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 04 06:52:27 test-preload-962163 kubelet[1160]: E0904 06:52:27.715894    1160 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/493ae791-7e01-4a38-bad9-fa399b78e64a-config-volume podName:493ae791-7e01-4a38-bad9-fa399b78e64a nodeName:}" failed. No retries permitted until 2025-09-04 06:52:29.715857724 +0000 UTC m=+8.797126238 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/493ae791-7e01-4a38-bad9-fa399b78e64a-config-volume") pod "coredns-668d6bf9bc-hlr5t" (UID: "493ae791-7e01-4a38-bad9-fa399b78e64a") : object "kube-system"/"coredns" not registered
	Sep 04 06:52:28 test-preload-962163 kubelet[1160]: E0904 06:52:28.051459    1160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-hlr5t" podUID="493ae791-7e01-4a38-bad9-fa399b78e64a"
	Sep 04 06:52:29 test-preload-962163 kubelet[1160]: E0904 06:52:29.732058    1160 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 04 06:52:29 test-preload-962163 kubelet[1160]: E0904 06:52:29.732139    1160 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/493ae791-7e01-4a38-bad9-fa399b78e64a-config-volume podName:493ae791-7e01-4a38-bad9-fa399b78e64a nodeName:}" failed. No retries permitted until 2025-09-04 06:52:33.732126387 +0000 UTC m=+12.813394911 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/493ae791-7e01-4a38-bad9-fa399b78e64a-config-volume") pod "coredns-668d6bf9bc-hlr5t" (UID: "493ae791-7e01-4a38-bad9-fa399b78e64a") : object "kube-system"/"coredns" not registered
	Sep 04 06:52:30 test-preload-962163 kubelet[1160]: E0904 06:52:30.051206    1160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-hlr5t" podUID="493ae791-7e01-4a38-bad9-fa399b78e64a"
	Sep 04 06:52:31 test-preload-962163 kubelet[1160]: E0904 06:52:31.090911    1160 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756968751090578779,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 06:52:31 test-preload-962163 kubelet[1160]: E0904 06:52:31.090930    1160 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756968751090578779,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 06:52:41 test-preload-962163 kubelet[1160]: E0904 06:52:41.092414    1160 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756968761092144994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 04 06:52:41 test-preload-962163 kubelet[1160]: E0904 06:52:41.092984    1160 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756968761092144994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [7ba27a79c4a04211cf30857023eda0e6fd3d278666de79c45776db160c9003d2] <==
	I0904 06:52:26.535642       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
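
The kubelet entries in the log above show the volume-mount retry delay doubling after each failure (durationBeforeRetry 500ms, then 1s, 2s, 4s) while the coredns ConfigMap was not yet registered. A minimal Go sketch of that doubling backoff, using a stand-in mount function and an assumed cap, not kubelet's actual implementation:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// mountVolume stands in for the real mount attempt; here it always fails
	// the way the kubelet log does while the ConfigMap is unregistered.
	func mountVolume() error {
		return errors.New(`object "kube-system"/"coredns" not registered`)
	}

	func main() {
		delay := 500 * time.Millisecond
		const maxDelay = 2 * time.Minute // assumed cap; kubelet's real cap may differ
		for attempt := 1; attempt <= 4; attempt++ {
			if err := mountVolume(); err != nil {
				fmt.Printf("attempt %d failed, no retries permitted for %v: %v\n", attempt, delay, err)
				time.Sleep(delay)
				delay *= 2 // 500ms -> 1s -> 2s -> 4s, matching the log
				if delay > maxDelay {
					delay = maxDelay
				}
			}
		}
	}
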
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-962163 -n test-preload-962163
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-962163 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-962163" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-962163
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-962163: (1.176801049s)
--- FAIL: TestPreload (174.61s)
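
Separately, the kube-proxy section of the post-mortem log shows its startup cleanup of leftover nftables rules failing with "Operation not supported" before it proceeds with the iptables proxier; on a kernel built without nf_tables support this looks benign. A generic Go sketch of that probe-and-fall-back idea, with hypothetical names rather than kube-proxy's actual code:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// pickBackend probes whether the kernel accepts an nftables table and
	// falls back to iptables when the probe fails (hypothetical helper).
	func pickBackend() string {
		if err := exec.Command("nft", "add", "table", "ip", "probe_kube").Run(); err != nil {
			// e.g. "Operation not supported" on kernels without nf_tables
			return "iptables"
		}
		_ = exec.Command("nft", "delete", "table", "ip", "probe_kube").Run() // best-effort cleanup of the probe table
		return "nftables"
	}

	func main() {
		fmt.Println("using", pickBackend(), "proxier")
	}
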

TestNoKubernetes/serial/StartNoArgs (67.52s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-324880 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-324880 --driver=kvm2  --container-runtime=crio: signal: killed (1m4.339284262s)

-- stdout --
	* [NoKubernetes-324880] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-1115845/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1115845/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-324880

-- /stdout --
no_kubernetes_test.go:195: failed to start minikube with args: "out/minikube-linux-amd64 start -p NoKubernetes-324880 --driver=kvm2  --container-runtime=crio" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestNoKubernetes/serial/StartNoArgs]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-324880 -n NoKubernetes-324880
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-324880 -n NoKubernetes-324880: exit status 3 (3.175460656s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0904 06:59:44.047165 1160229 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.3:22: connect: no route to host
	E0904 06:59:44.047188 1160229 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.3:22: connect: no route to host

** /stderr **
helpers_test.go:247: status error: exit status 3 (may be ok)
helpers_test.go:249: "NoKubernetes-324880" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (67.52s)
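
The non-zero exit above ("signal: killed" after 1m4s), together with the stdout stopping right after "Starting minikube without Kubernetes", is consistent with the harness's context deadline expiring and the child minikube process being SIGKILLed; that error string is exactly how Go's os/exec reports a context-cancelled command. A minimal, generic sketch of the behaviour, not the minikube test harness itself:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Once the context expires, CommandContext sends SIGKILL to the
		// child, and the returned error prints as "signal: killed".
		ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
		defer cancel()
		err := exec.CommandContext(ctx, "sleep", "30").Run()
		fmt.Println(err) // signal: killed
	}
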

TestPause/serial/SecondStartNoReconfiguration (56.32s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-017566 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-017566 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (51.76069085s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-017566] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-1115845/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1115845/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-017566" primary control-plane node in "pause-017566" cluster
	* Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-017566" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I0904 07:01:09.902408 1161732 out.go:360] Setting OutFile to fd 1 ...
	I0904 07:01:09.903196 1161732 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 07:01:09.903260 1161732 out.go:374] Setting ErrFile to fd 2...
	I0904 07:01:09.903277 1161732 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 07:01:09.903760 1161732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1115845/.minikube/bin
	I0904 07:01:09.904952 1161732 out.go:368] Setting JSON to false
	I0904 07:01:09.906065 1161732 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":17013,"bootTime":1756952257,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 07:01:09.906178 1161732 start.go:140] virtualization: kvm guest
	I0904 07:01:09.907805 1161732 out.go:179] * [pause-017566] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 07:01:09.908976 1161732 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 07:01:09.909010 1161732 notify.go:220] Checking for updates...
	I0904 07:01:09.910997 1161732 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 07:01:09.912148 1161732 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1115845/kubeconfig
	I0904 07:01:09.913170 1161732 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1115845/.minikube
	I0904 07:01:09.914073 1161732 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 07:01:09.915035 1161732 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 07:01:09.916362 1161732 config.go:182] Loaded profile config "pause-017566": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 07:01:09.916829 1161732 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 07:01:09.916879 1161732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 07:01:09.934800 1161732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34317
	I0904 07:01:09.935395 1161732 main.go:141] libmachine: () Calling .GetVersion
	I0904 07:01:09.935984 1161732 main.go:141] libmachine: Using API Version  1
	I0904 07:01:09.936016 1161732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 07:01:09.936409 1161732 main.go:141] libmachine: () Calling .GetMachineName
	I0904 07:01:09.936644 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:09.936927 1161732 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 07:01:09.937246 1161732 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 07:01:09.937293 1161732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 07:01:09.952626 1161732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37141
	I0904 07:01:09.953214 1161732 main.go:141] libmachine: () Calling .GetVersion
	I0904 07:01:09.953784 1161732 main.go:141] libmachine: Using API Version  1
	I0904 07:01:09.953816 1161732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 07:01:09.954335 1161732 main.go:141] libmachine: () Calling .GetMachineName
	I0904 07:01:09.954553 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:09.993954 1161732 out.go:179] * Using the kvm2 driver based on existing profile
	I0904 07:01:09.995109 1161732 start.go:304] selected driver: kvm2
	I0904 07:01:09.995132 1161732 start.go:918] validating driver "kvm2" against &{Name:pause-017566 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-017566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 07:01:09.995321 1161732 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 07:01:09.995816 1161732 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 07:01:09.995920 1161732 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1115845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0904 07:01:10.019030 1161732 install.go:137] /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0904 07:01:10.020243 1161732 cni.go:84] Creating CNI manager for ""
	I0904 07:01:10.020329 1161732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 07:01:10.020416 1161732 start.go:348] cluster config:
	{Name:pause-017566 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-017566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 07:01:10.020664 1161732 iso.go:125] acquiring lock: {Name:mk8046b526ef8e07e7f8bc343ab464442f664799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 07:01:10.022309 1161732 out.go:179] * Starting "pause-017566" primary control-plane node in "pause-017566" cluster
	I0904 07:01:10.023281 1161732 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 07:01:10.023331 1161732 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21409-1115845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 07:01:10.023355 1161732 cache.go:58] Caching tarball of preloaded images
	I0904 07:01:10.023454 1161732 preload.go:172] Found /home/jenkins/minikube-integration/21409-1115845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0904 07:01:10.023469 1161732 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0904 07:01:10.023627 1161732 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566/config.json ...
	I0904 07:01:10.023895 1161732 start.go:360] acquireMachinesLock for pause-017566: {Name:mk3d0e482c06d0ca53afa1318fbdd30ffc2f15b4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 07:01:27.112230 1161732 start.go:364] duration metric: took 17.088294365s to acquireMachinesLock for "pause-017566"
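The machines lock acquired above follows the Delay:500ms/Timeout:13m0s spec logged at 07:01:10.023895; the 17s wait just means another process held it. A sketch of that acquire-with-timeout pattern, assuming a generic try-lock (the actual lock backend is not shown in the log):

package main

import (
	"fmt"
	"time"
)

// acquire polls try every delay until it succeeds or timeout elapses.
func acquire(try func() bool, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for !try() {
		if time.Now().After(deadline) {
			return fmt.Errorf("lock not acquired within %s", timeout)
		}
		time.Sleep(delay)
	}
	return nil
}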
	I0904 07:01:27.112296 1161732 start.go:96] Skipping create...Using existing machine configuration
	I0904 07:01:27.112305 1161732 fix.go:54] fixHost starting: 
	I0904 07:01:27.112765 1161732 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 07:01:27.112831 1161732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 07:01:27.132201 1161732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45975
	I0904 07:01:27.132672 1161732 main.go:141] libmachine: () Calling .GetVersion
	I0904 07:01:27.133209 1161732 main.go:141] libmachine: Using API Version  1
	I0904 07:01:27.133241 1161732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 07:01:27.133709 1161732 main.go:141] libmachine: () Calling .GetMachineName
	I0904 07:01:27.133962 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:27.134143 1161732 main.go:141] libmachine: (pause-017566) Calling .GetState
	I0904 07:01:27.136167 1161732 fix.go:112] recreateIfNeeded on pause-017566: state=Running err=<nil>
	W0904 07:01:27.136193 1161732 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 07:01:27.138268 1161732 out.go:252] * Updating the running kvm2 "pause-017566" VM ...
	I0904 07:01:27.138298 1161732 machine.go:93] provisionDockerMachine start ...
	I0904 07:01:27.138313 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:27.138518 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:27.141213 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.141742 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:27.141767 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.142052 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:27.142211 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.142329 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.142435 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:27.142642 1161732 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:27.142939 1161732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0904 07:01:27.142951 1161732 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 07:01:27.264475 1161732 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-017566
	
	I0904 07:01:27.264509 1161732 main.go:141] libmachine: (pause-017566) Calling .GetMachineName
	I0904 07:01:27.264829 1161732 buildroot.go:166] provisioning hostname "pause-017566"
	I0904 07:01:27.264868 1161732 main.go:141] libmachine: (pause-017566) Calling .GetMachineName
	I0904 07:01:27.265100 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:27.268258 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.268727 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:27.268755 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.268949 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:27.269134 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.269298 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.269460 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:27.269625 1161732 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:27.269851 1161732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0904 07:01:27.269866 1161732 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-017566 && echo "pause-017566" | sudo tee /etc/hostname
	I0904 07:01:27.402385 1161732 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-017566
	
	I0904 07:01:27.402422 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:27.406417 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.406873 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:27.406898 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.407170 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:27.407411 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.407590 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.407783 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:27.408014 1161732 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:27.408402 1161732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0904 07:01:27.408442 1161732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-017566' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-017566/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-017566' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 07:01:27.536058 1161732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 07:01:27.536105 1161732 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-1115845/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-1115845/.minikube}
	I0904 07:01:27.536138 1161732 buildroot.go:174] setting up certificates
	I0904 07:01:27.536156 1161732 provision.go:84] configureAuth start
	I0904 07:01:27.536176 1161732 main.go:141] libmachine: (pause-017566) Calling .GetMachineName
	I0904 07:01:27.536479 1161732 main.go:141] libmachine: (pause-017566) Calling .GetIP
	I0904 07:01:27.539375 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.539785 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:27.539812 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.540030 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:27.542344 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.542629 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:27.542667 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.542888 1161732 provision.go:143] copyHostCerts
	I0904 07:01:27.542988 1161732 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1115845/.minikube/key.pem, removing ...
	I0904 07:01:27.543011 1161732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1115845/.minikube/key.pem
	I0904 07:01:27.543079 1161732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-1115845/.minikube/key.pem (1679 bytes)
	I0904 07:01:27.543198 1161732 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.pem, removing ...
	I0904 07:01:27.543210 1161732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.pem
	I0904 07:01:27.543244 1161732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.pem (1082 bytes)
	I0904 07:01:27.543319 1161732 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1115845/.minikube/cert.pem, removing ...
	I0904 07:01:27.543330 1161732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1115845/.minikube/cert.pem
	I0904 07:01:27.543357 1161732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-1115845/.minikube/cert.pem (1123 bytes)
	I0904 07:01:27.543418 1161732 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca-key.pem org=jenkins.pause-017566 san=[127.0.0.1 192.168.39.168 localhost minikube pause-017566]
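The server cert generated here carries the SANs listed in the log line (two IP SANs plus three DNS names). A sketch of that generation with Go's crypto/x509, assuming the CA pair is already loaded; PEM encoding and key persistence are elided, and the 26280h lifetime is taken from CertExpiration in the cluster config above:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// makeServerCert issues a server cert whose subject and SANs mirror the
// logged org=... and san=[...] values.
func makeServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.pause-017566"}},
		DNSNames:     []string{"localhost", "minikube", "pause-017566"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.168")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	return x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
}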
	I0904 07:01:27.703670 1161732 provision.go:177] copyRemoteCerts
	I0904 07:01:27.703728 1161732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 07:01:27.703755 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:27.706487 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.706849 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:27.706884 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.707049 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:27.707243 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.707437 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:27.707651 1161732 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/pause-017566/id_rsa Username:docker}
	I0904 07:01:27.798776 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0904 07:01:27.833553 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0904 07:01:27.865385 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 07:01:27.894589 1161732 provision.go:87] duration metric: took 358.411244ms to configureAuth
	I0904 07:01:27.894626 1161732 buildroot.go:189] setting minikube options for container-runtime
	I0904 07:01:27.894995 1161732 config.go:182] Loaded profile config "pause-017566": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 07:01:27.895097 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:27.898221 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.898667 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:27.898715 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.898935 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:27.899156 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.899364 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.899545 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:27.899735 1161732 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:27.899945 1161732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0904 07:01:27.899959 1161732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 07:01:33.515165 1161732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 07:01:33.515196 1161732 machine.go:96] duration metric: took 6.376888505s to provisionDockerMachine
	I0904 07:01:33.515212 1161732 start.go:293] postStartSetup for "pause-017566" (driver="kvm2")
	I0904 07:01:33.515226 1161732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 07:01:33.515249 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:33.515626 1161732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 07:01:33.515661 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:33.519114 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.519592 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:33.519624 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.519795 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:33.519977 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:33.520206 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:33.520390 1161732 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/pause-017566/id_rsa Username:docker}
	I0904 07:01:33.610679 1161732 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 07:01:33.616704 1161732 info.go:137] Remote host: Buildroot 2025.02
	I0904 07:01:33.616739 1161732 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1115845/.minikube/addons for local assets ...
	I0904 07:01:33.616814 1161732 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1115845/.minikube/files for local assets ...
	I0904 07:01:33.616905 1161732 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-1115845/.minikube/files/etc/ssl/certs/11200742.pem -> 11200742.pem in /etc/ssl/certs
	I0904 07:01:33.617040 1161732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 07:01:33.631551 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/files/etc/ssl/certs/11200742.pem --> /etc/ssl/certs/11200742.pem (1708 bytes)
	I0904 07:01:33.665307 1161732 start.go:296] duration metric: took 150.079866ms for postStartSetup
	I0904 07:01:33.665355 1161732 fix.go:56] duration metric: took 6.553050716s for fixHost
	I0904 07:01:33.665388 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:33.669609 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.670031 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:33.670076 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.670271 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:33.670479 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:33.670680 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:33.670879 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:33.671044 1161732 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:33.671293 1161732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0904 07:01:33.671311 1161732 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0904 07:01:33.787999 1161732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756969293.783889209
	
	I0904 07:01:33.788029 1161732 fix.go:216] guest clock: 1756969293.783889209
	I0904 07:01:33.788040 1161732 fix.go:229] Guest: 2025-09-04 07:01:33.783889209 +0000 UTC Remote: 2025-09-04 07:01:33.665366067 +0000 UTC m=+23.813013966 (delta=118.523142ms)
	I0904 07:01:33.788068 1161732 fix.go:200] guest clock delta is within tolerance: 118.523142ms
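The clock check above compares the guest's `date +%s.%N` against the host clock and only forces a resync when the skew exceeds a tolerance. A sketch of that comparison; the log shows only that a 118.5ms delta passed, so the threshold is left as a parameter rather than assumed:

package main

import "time"

// clockWithinTolerance reports whether the absolute guest/host skew is
// at most tolerance.
func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}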
	I0904 07:01:33.788076 1161732 start.go:83] releasing machines lock for "pause-017566", held for 6.675805339s
	I0904 07:01:33.788102 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:33.788408 1161732 main.go:141] libmachine: (pause-017566) Calling .GetIP
	I0904 07:01:33.791521 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.791914 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:33.791977 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.792095 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:33.792611 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:33.792808 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:33.792932 1161732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 07:01:33.792992 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:33.793044 1161732 ssh_runner.go:195] Run: cat /version.json
	I0904 07:01:33.793087 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:33.795985 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.796378 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:33.796407 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.796428 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.796674 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:33.796854 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:33.796939 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:33.796976 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.797029 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:33.797123 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:33.797170 1161732 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/pause-017566/id_rsa Username:docker}
	I0904 07:01:33.797245 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:33.797390 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:33.797564 1161732 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/pause-017566/id_rsa Username:docker}
	I0904 07:01:33.916461 1161732 ssh_runner.go:195] Run: systemctl --version
	I0904 07:01:33.922526 1161732 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 07:01:34.076454 1161732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0904 07:01:34.087525 1161732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0904 07:01:34.087620 1161732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 07:01:34.098978 1161732 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0904 07:01:34.099005 1161732 start.go:495] detecting cgroup driver to use...
	I0904 07:01:34.099086 1161732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 07:01:34.120306 1161732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 07:01:34.137553 1161732 docker.go:218] disabling cri-docker service (if available) ...
	I0904 07:01:34.137664 1161732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 07:01:34.154114 1161732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 07:01:34.169285 1161732 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 07:01:34.345407 1161732 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 07:01:34.520424 1161732 docker.go:234] disabling docker service ...
	I0904 07:01:34.520502 1161732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 07:01:34.550550 1161732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 07:01:34.565558 1161732 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 07:01:34.746021 1161732 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 07:01:34.918646 1161732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 07:01:34.936473 1161732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 07:01:34.964184 1161732 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0904 07:01:34.964265 1161732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:34.976814 1161732 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 07:01:34.976888 1161732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:34.989396 1161732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:35.002104 1161732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:35.014978 1161732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 07:01:35.027454 1161732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:35.044316 1161732 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:35.058383 1161732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:35.070619 1161732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 07:01:35.081214 1161732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 07:01:35.096031 1161732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 07:01:35.271583 1161732 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0904 07:01:39.545989 1161732 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.274359891s)
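Taken together, the sed edits above (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) leave the drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly this shape before the restart. This is a reconstruction from the logged commands, not a capture from the VM, with the stock CRI-O section headers assumed:

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]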
	I0904 07:01:39.546026 1161732 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 07:01:39.546098 1161732 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 07:01:39.551592 1161732 start.go:563] Will wait 60s for crictl version
	I0904 07:01:39.551658 1161732 ssh_runner.go:195] Run: which crictl
	I0904 07:01:39.555911 1161732 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 07:01:39.593817 1161732 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0904 07:01:39.593911 1161732 ssh_runner.go:195] Run: crio --version
	I0904 07:01:39.623039 1161732 ssh_runner.go:195] Run: crio --version
	I0904 07:01:39.661659 1161732 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0904 07:01:39.662705 1161732 main.go:141] libmachine: (pause-017566) Calling .GetIP
	I0904 07:01:39.666104 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:39.666530 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:39.666563 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:39.666943 1161732 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0904 07:01:39.672719 1161732 kubeadm.go:875] updating cluster {Name:pause-017566 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-017566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 07:01:39.672897 1161732 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 07:01:39.672947 1161732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 07:01:39.714651 1161732 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 07:01:39.714676 1161732 crio.go:433] Images already preloaded, skipping extraction
	I0904 07:01:39.714749 1161732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 07:01:39.751978 1161732 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 07:01:39.752004 1161732 cache_images.go:85] Images are preloaded, skipping loading
	I0904 07:01:39.752012 1161732 kubeadm.go:926] updating node { 192.168.39.168 8443 v1.34.0 crio true true} ...
	I0904 07:01:39.752114 1161732 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-017566 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:pause-017566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 07:01:39.752179 1161732 ssh_runner.go:195] Run: crio config
	I0904 07:01:39.795416 1161732 cni.go:84] Creating CNI manager for ""
	I0904 07:01:39.795443 1161732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 07:01:39.795458 1161732 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 07:01:39.795500 1161732 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.168 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-017566 NodeName:pause-017566 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 07:01:39.795668 1161732 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-017566"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.168"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.168"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 07:01:39.795740 1161732 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 07:01:39.807142 1161732 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 07:01:39.807227 1161732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 07:01:39.818028 1161732 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0904 07:01:39.841592 1161732 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 07:01:39.863014 1161732 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I0904 07:01:39.882663 1161732 ssh_runner.go:195] Run: grep 192.168.39.168	control-plane.minikube.internal$ /etc/hosts
	I0904 07:01:39.886632 1161732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 07:01:40.059102 1161732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 07:01:40.075459 1161732 certs.go:68] Setting up /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566 for IP: 192.168.39.168
	I0904 07:01:40.075502 1161732 certs.go:194] generating shared ca certs ...
	I0904 07:01:40.075538 1161732 certs.go:226] acquiring lock for ca certs: {Name:mkb48abb711128619cd278e65e40c326a6b20d89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:01:40.075768 1161732 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.key
	I0904 07:01:40.075842 1161732 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/proxy-client-ca.key
	I0904 07:01:40.075862 1161732 certs.go:256] generating profile certs ...
	I0904 07:01:40.075981 1161732 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566/client.key
	I0904 07:01:40.076067 1161732 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566/apiserver.key.46bf764b
	I0904 07:01:40.076144 1161732 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566/proxy-client.key
	I0904 07:01:40.076287 1161732 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/1120074.pem (1338 bytes)
	W0904 07:01:40.076327 1161732 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/1120074_empty.pem, impossibly tiny 0 bytes
	I0904 07:01:40.076340 1161732 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 07:01:40.076373 1161732 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem (1082 bytes)
	I0904 07:01:40.076404 1161732 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/cert.pem (1123 bytes)
	I0904 07:01:40.076436 1161732 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/key.pem (1679 bytes)
	I0904 07:01:40.076497 1161732 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/files/etc/ssl/certs/11200742.pem (1708 bytes)
	I0904 07:01:40.077172 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 07:01:40.108154 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0904 07:01:40.136983 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 07:01:40.167004 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0904 07:01:40.199411 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0904 07:01:40.229354 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0904 07:01:40.263364 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 07:01:40.294718 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0904 07:01:40.329466 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/files/etc/ssl/certs/11200742.pem --> /usr/share/ca-certificates/11200742.pem (1708 bytes)
	I0904 07:01:40.363576 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 07:01:40.396318 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/1120074.pem --> /usr/share/ca-certificates/1120074.pem (1338 bytes)
	I0904 07:01:40.430931 1161732 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 07:01:40.452998 1161732 ssh_runner.go:195] Run: openssl version
	I0904 07:01:40.461063 1161732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11200742.pem && ln -fs /usr/share/ca-certificates/11200742.pem /etc/ssl/certs/11200742.pem"
	I0904 07:01:40.477331 1161732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11200742.pem
	I0904 07:01:40.492886 1161732 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  4 06:04 /usr/share/ca-certificates/11200742.pem
	I0904 07:01:40.493057 1161732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11200742.pem
	I0904 07:01:40.508368 1161732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11200742.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 07:01:40.573215 1161732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 07:01:40.592349 1161732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 07:01:40.603505 1161732 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 05:54 /usr/share/ca-certificates/minikubeCA.pem
	I0904 07:01:40.603580 1161732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 07:01:40.621795 1161732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 07:01:40.656205 1161732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1120074.pem && ln -fs /usr/share/ca-certificates/1120074.pem /etc/ssl/certs/1120074.pem"
	I0904 07:01:40.689203 1161732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1120074.pem
	I0904 07:01:40.700628 1161732 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  4 06:04 /usr/share/ca-certificates/1120074.pem
	I0904 07:01:40.700733 1161732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1120074.pem
	I0904 07:01:40.718305 1161732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1120074.pem /etc/ssl/certs/51391683.0"
	I0904 07:01:40.748388 1161732 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 07:01:40.764024 1161732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0904 07:01:40.790149 1161732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0904 07:01:40.806535 1161732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0904 07:01:40.822778 1161732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0904 07:01:40.836036 1161732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0904 07:01:40.848094 1161732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
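The `-checkend 86400` invocations above ask openssl whether each control-plane certificate expires within the next 24 hours (86400 seconds); a non-zero exit would force minikube to regenerate the cert. A minimal Go sketch of the same check, assuming a PEM-encoded certificate (the path below is illustrative):

    // checkend.go: equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
    // exit non-zero if the certificate expires within the next 24 hours.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt") // illustrative path
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(2)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(2)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(2)
        }
        if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
            fmt.Println("Certificate will expire")
            os.Exit(1) // mirrors openssl's exit status on imminent expiry
        }
        fmt.Println("Certificate will not expire")
    }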
	I0904 07:01:40.861650 1161732 kubeadm.go:392] StartCluster: {Name:pause-017566 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-017566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 07:01:40.861903 1161732 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 07:01:40.862007 1161732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 07:01:40.952656 1161732 cri.go:89] found id: "7bd228eee0c8478996d5e834f0e01320ec10565c851fb545d08f599c036f664e"
	I0904 07:01:40.952687 1161732 cri.go:89] found id: "bb4a7e0352be4102c6ffc78172d580c052dba2d2803d939ac1ad23e45e8677ca"
	I0904 07:01:40.952692 1161732 cri.go:89] found id: "0b029332740d46dc6f0939ada2079b4939254cb16a68486524aa04a27a2b6bcf"
	I0904 07:01:40.952697 1161732 cri.go:89] found id: "b880e684a6e0d5818a2df4915f902ea1940a2b8fab778c808806680aa4d82037"
	I0904 07:01:40.952702 1161732 cri.go:89] found id: "143324528cf349785e87b806fa537a8990761956d653c2efad7cbd0eba68feb9"
	I0904 07:01:40.952707 1161732 cri.go:89] found id: "6f3f77c12db6e0e60d13e8d3c64818d2d235cc405b125f184aa5dc00f939cd6a"
	I0904 07:01:40.952711 1161732 cri.go:89] found id: ""
	I0904 07:01:40.952765 1161732 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
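For context on the cri.go lines at the end of the stderr block: minikube enumerates kube-system containers by shelling out to crictl with a pod-namespace label filter and collecting the non-empty output lines, one per "found id:" entry. A minimal sketch of that call, assuming crictl and sudo are available on the node:

    // crictl_list.go: list kube-system container IDs the way the log above does.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        var ids []string
        for _, line := range strings.Split(string(out), "\n") {
            if id := strings.TrimSpace(line); id != "" {
                ids = append(ids, id) // each ID matches one "found id:" log line
            }
        }
        fmt.Printf("found %d container ids\n", len(ids))
    }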
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-017566 -n pause-017566
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-017566 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-017566 logs -n 25: (1.791586346s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                          ARGS                                                                                                           │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-324880 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                                     │ NoKubernetes-324880       │ jenkins │ v1.36.0 │ 04 Sep 25 06:57 UTC │ 04 Sep 25 06:58 UTC │
	│ start   │ -p kubernetes-upgrade-177439 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                                                                                         │ kubernetes-upgrade-177439 │ jenkins │ v1.36.0 │ 04 Sep 25 06:57 UTC │                     │
	│ start   │ -p kubernetes-upgrade-177439 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                  │ kubernetes-upgrade-177439 │ jenkins │ v1.36.0 │ 04 Sep 25 06:57 UTC │ 04 Sep 25 07:00 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-798275 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ stopped-upgrade-798275    │ jenkins │ v1.36.0 │ 04 Sep 25 06:58 UTC │                     │
	│ delete  │ -p stopped-upgrade-798275                                                                                                                                                                                               │ stopped-upgrade-798275    │ jenkins │ v1.36.0 │ 04 Sep 25 06:58 UTC │ 04 Sep 25 06:58 UTC │
	│ start   │ -p cert-expiration-986529 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                    │ cert-expiration-986529    │ jenkins │ v1.36.0 │ 04 Sep 25 06:58 UTC │ 04 Sep 25 06:59 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-050549 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ running-upgrade-050549    │ jenkins │ v1.36.0 │ 04 Sep 25 06:58 UTC │                     │
	│ delete  │ -p running-upgrade-050549                                                                                                                                                                                               │ running-upgrade-050549    │ jenkins │ v1.36.0 │ 04 Sep 25 06:58 UTC │ 04 Sep 25 06:58 UTC │
	│ start   │ -p force-systemd-flag-969000 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                               │ force-systemd-flag-969000 │ jenkins │ v1.36.0 │ 04 Sep 25 06:58 UTC │ 04 Sep 25 06:59 UTC │
	│ ssh     │ -p NoKubernetes-324880 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-324880       │ jenkins │ v1.36.0 │ 04 Sep 25 06:58 UTC │                     │
	│ stop    │ -p NoKubernetes-324880                                                                                                                                                                                                  │ NoKubernetes-324880       │ jenkins │ v1.36.0 │ 04 Sep 25 06:58 UTC │ 04 Sep 25 06:58 UTC │
	│ start   │ -p NoKubernetes-324880 --driver=kvm2  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-324880       │ jenkins │ v1.36.0 │ 04 Sep 25 06:58 UTC │                     │
	│ delete  │ -p NoKubernetes-324880                                                                                                                                                                                                  │ NoKubernetes-324880       │ jenkins │ v1.36.0 │ 04 Sep 25 06:59 UTC │ 04 Sep 25 06:59 UTC │
	│ start   │ -p pause-017566 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                 │ pause-017566              │ jenkins │ v1.36.0 │ 04 Sep 25 06:59 UTC │ 04 Sep 25 07:01 UTC │
	│ ssh     │ force-systemd-flag-969000 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                    │ force-systemd-flag-969000 │ jenkins │ v1.36.0 │ 04 Sep 25 06:59 UTC │ 04 Sep 25 06:59 UTC │
	│ delete  │ -p force-systemd-flag-969000                                                                                                                                                                                            │ force-systemd-flag-969000 │ jenkins │ v1.36.0 │ 04 Sep 25 06:59 UTC │ 04 Sep 25 06:59 UTC │
	│ start   │ -p cert-options-153188 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio │ cert-options-153188       │ jenkins │ v1.36.0 │ 04 Sep 25 06:59 UTC │ 04 Sep 25 07:00 UTC │
	│ delete  │ -p kubernetes-upgrade-177439                                                                                                                                                                                            │ kubernetes-upgrade-177439 │ jenkins │ v1.36.0 │ 04 Sep 25 07:00 UTC │ 04 Sep 25 07:00 UTC │
	│ start   │ -p auto-644084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                                                                                   │ auto-644084               │ jenkins │ v1.36.0 │ 04 Sep 25 07:00 UTC │ 04 Sep 25 07:01 UTC │
	│ ssh     │ cert-options-153188 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                             │ cert-options-153188       │ jenkins │ v1.36.0 │ 04 Sep 25 07:00 UTC │ 04 Sep 25 07:00 UTC │
	│ ssh     │ -p cert-options-153188 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                           │ cert-options-153188       │ jenkins │ v1.36.0 │ 04 Sep 25 07:00 UTC │ 04 Sep 25 07:00 UTC │
	│ delete  │ -p cert-options-153188                                                                                                                                                                                                  │ cert-options-153188       │ jenkins │ v1.36.0 │ 04 Sep 25 07:00 UTC │ 04 Sep 25 07:00 UTC │
	│ start   │ -p kindnet-644084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                                                                                  │ kindnet-644084            │ jenkins │ v1.36.0 │ 04 Sep 25 07:00 UTC │                     │
	│ start   │ -p pause-017566 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                          │ pause-017566              │ jenkins │ v1.36.0 │ 04 Sep 25 07:01 UTC │ 04 Sep 25 07:02 UTC │
	│ ssh     │ -p auto-644084 pgrep -a kubelet                                                                                                                                                                                         │ auto-644084               │ jenkins │ v1.36.0 │ 04 Sep 25 07:01 UTC │ 04 Sep 25 07:01 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 07:01:09
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 07:01:09.902408 1161732 out.go:360] Setting OutFile to fd 1 ...
	I0904 07:01:09.903196 1161732 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 07:01:09.903260 1161732 out.go:374] Setting ErrFile to fd 2...
	I0904 07:01:09.903277 1161732 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 07:01:09.903760 1161732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1115845/.minikube/bin
	I0904 07:01:09.904952 1161732 out.go:368] Setting JSON to false
	I0904 07:01:09.906065 1161732 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":17013,"bootTime":1756952257,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 07:01:09.906178 1161732 start.go:140] virtualization: kvm guest
	I0904 07:01:09.907805 1161732 out.go:179] * [pause-017566] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 07:01:09.908976 1161732 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 07:01:09.909010 1161732 notify.go:220] Checking for updates...
	I0904 07:01:09.910997 1161732 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 07:01:09.912148 1161732 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1115845/kubeconfig
	I0904 07:01:09.913170 1161732 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1115845/.minikube
	I0904 07:01:09.914073 1161732 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 07:01:09.915035 1161732 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 07:01:09.916362 1161732 config.go:182] Loaded profile config "pause-017566": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 07:01:09.916829 1161732 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 07:01:09.916879 1161732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 07:01:09.934800 1161732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34317
	I0904 07:01:09.935395 1161732 main.go:141] libmachine: () Calling .GetVersion
	I0904 07:01:09.935984 1161732 main.go:141] libmachine: Using API Version  1
	I0904 07:01:09.936016 1161732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 07:01:09.936409 1161732 main.go:141] libmachine: () Calling .GetMachineName
	I0904 07:01:09.936644 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:09.936927 1161732 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 07:01:09.937246 1161732 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 07:01:09.937293 1161732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 07:01:09.952626 1161732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37141
	I0904 07:01:09.953214 1161732 main.go:141] libmachine: () Calling .GetVersion
	I0904 07:01:09.953784 1161732 main.go:141] libmachine: Using API Version  1
	I0904 07:01:09.953816 1161732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 07:01:09.954335 1161732 main.go:141] libmachine: () Calling .GetMachineName
	I0904 07:01:09.954553 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:09.993954 1161732 out.go:179] * Using the kvm2 driver based on existing profile
	I0904 07:01:09.995109 1161732 start.go:304] selected driver: kvm2
	I0904 07:01:09.995132 1161732 start.go:918] validating driver "kvm2" against &{Name:pause-017566 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-017566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 07:01:09.995321 1161732 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 07:01:09.995816 1161732 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 07:01:09.995920 1161732 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1115845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0904 07:01:10.019030 1161732 install.go:137] /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0904 07:01:10.020243 1161732 cni.go:84] Creating CNI manager for ""
	I0904 07:01:10.020329 1161732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 07:01:10.020416 1161732 start.go:348] cluster config:
	{Name:pause-017566 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-017566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
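The single-line cluster config dumps above are Go structs rendered with fmt's %+v verb: field names and values separated by spaces, nested structs in braces, maps as map[key:value ...]. A tiny illustration with a cut-down struct (the fields are a small subset of minikube's real config, shown only to explain the formatting):

    // dump.go: how %+v produces the "Name:pause-017566 Memory:3072 ..." style.
    package main

    import "fmt"

    type KubernetesConfig struct {
        KubernetesVersion string
        ClusterName       string
        ContainerRuntime  string
    }

    type ClusterConfig struct {
        Name             string
        Memory           int
        CPUs             int
        Driver           string
        KubernetesConfig KubernetesConfig
        Addons           map[string]bool
    }

    func main() {
        cfg := ClusterConfig{
            Name:   "pause-017566",
            Memory: 3072,
            CPUs:   2,
            Driver: "kvm2",
            KubernetesConfig: KubernetesConfig{
                KubernetesVersion: "v1.34.0",
                ClusterName:       "pause-017566",
                ContainerRuntime:  "crio",
            },
            Addons: map[string]bool{"ingress": false},
        }
        // Prints: {Name:pause-017566 Memory:3072 CPUs:2 Driver:kvm2
        //   KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-017566
        //   ContainerRuntime:crio} Addons:map[ingress:false]}
        fmt.Printf("%+v\n", cfg)
    }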
	I0904 07:01:10.020664 1161732 iso.go:125] acquiring lock: {Name:mk8046b526ef8e07e7f8bc343ab464442f664799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 07:01:10.022309 1161732 out.go:179] * Starting "pause-017566" primary control-plane node in "pause-017566" cluster
	I0904 07:01:08.460360 1161036 out.go:252]   - Generating certificates and keys ...
	I0904 07:01:08.460553 1161036 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0904 07:01:08.460651 1161036 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0904 07:01:08.538889 1161036 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0904 07:01:08.809600 1161036 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0904 07:01:09.114655 1161036 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0904 07:01:09.744611 1161036 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0904 07:01:10.137279 1161036 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0904 07:01:10.137551 1161036 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-644084 localhost] and IPs [192.168.61.91 127.0.0.1 ::1]
	I0904 07:01:10.197031 1161036 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0904 07:01:10.197229 1161036 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-644084 localhost] and IPs [192.168.61.91 127.0.0.1 ::1]
	I0904 07:01:10.306155 1161036 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0904 07:01:10.365532 1161036 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0904 07:01:10.570379 1161036 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0904 07:01:10.570496 1161036 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0904 07:01:10.621046 1161036 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0904 07:01:11.024853 1161036 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0904 07:01:11.448309 1161036 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0904 07:01:11.496168 1161036 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0904 07:01:11.620120 1161036 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0904 07:01:11.620869 1161036 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0904 07:01:11.623044 1161036 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0904 07:01:09.416227 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:09.416748 1161522 main.go:141] libmachine: (kindnet-644084) DBG | unable to find current IP address of domain kindnet-644084 in network mk-kindnet-644084
	I0904 07:01:09.416801 1161522 main.go:141] libmachine: (kindnet-644084) DBG | I0904 07:01:09.416733 1161573 retry.go:31] will retry after 2.148885028s: waiting for domain to come up
	I0904 07:01:11.567679 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:11.568392 1161522 main.go:141] libmachine: (kindnet-644084) DBG | unable to find current IP address of domain kindnet-644084 in network mk-kindnet-644084
	I0904 07:01:11.568439 1161522 main.go:141] libmachine: (kindnet-644084) DBG | I0904 07:01:11.568313 1161573 retry.go:31] will retry after 1.910963226s: waiting for domain to come up
	I0904 07:01:10.023281 1161732 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 07:01:10.023331 1161732 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21409-1115845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 07:01:10.023355 1161732 cache.go:58] Caching tarball of preloaded images
	I0904 07:01:10.023454 1161732 preload.go:172] Found /home/jenkins/minikube-integration/21409-1115845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0904 07:01:10.023469 1161732 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0904 07:01:10.023627 1161732 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566/config.json ...
	I0904 07:01:10.023895 1161732 start.go:360] acquireMachinesLock for pause-017566: {Name:mk3d0e482c06d0ca53afa1318fbdd30ffc2f15b4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 07:01:11.624627 1161036 out.go:252]   - Booting up control plane ...
	I0904 07:01:11.624773 1161036 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0904 07:01:11.624899 1161036 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0904 07:01:11.625030 1161036 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0904 07:01:11.653918 1161036 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0904 07:01:11.654124 1161036 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0904 07:01:11.666350 1161036 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0904 07:01:11.668822 1161036 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0904 07:01:11.668917 1161036 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0904 07:01:11.866416 1161036 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0904 07:01:11.866608 1161036 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0904 07:01:12.866545 1161036 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001211929s
	I0904 07:01:12.869311 1161036 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0904 07:01:12.869486 1161036 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.61.91:8443/livez
	I0904 07:01:12.869626 1161036 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0904 07:01:12.869755 1161036 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0904 07:01:13.481370 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:13.482131 1161522 main.go:141] libmachine: (kindnet-644084) DBG | unable to find current IP address of domain kindnet-644084 in network mk-kindnet-644084
	I0904 07:01:13.482165 1161522 main.go:141] libmachine: (kindnet-644084) DBG | I0904 07:01:13.482080 1161573 retry.go:31] will retry after 2.962922625s: waiting for domain to come up
	I0904 07:01:16.446718 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:16.447198 1161522 main.go:141] libmachine: (kindnet-644084) DBG | unable to find current IP address of domain kindnet-644084 in network mk-kindnet-644084
	I0904 07:01:16.447274 1161522 main.go:141] libmachine: (kindnet-644084) DBG | I0904 07:01:16.447188 1161573 retry.go:31] will retry after 4.019296735s: waiting for domain to come up
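The kindnet-644084 lines above show libmachine polling libvirt for the new domain's DHCP lease, sleeping a growing, jittered interval between attempts (the retry.go:31 "will retry after" messages). A minimal sketch of that poll-with-backoff pattern; lookupDomainIP is a hypothetical stand-in for the real libvirt lease query:

    // retry_ip.go: poll for a VM's IP with jittered, growing backoff.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupDomainIP is hypothetical; the real code asks libvirt for the
    // domain's current DHCP lease in the host network.
    func lookupDomainIP(domain string) (string, error) {
        return "", errors.New("unable to find current IP address of domain " + domain)
    }

    func main() {
        delay := time.Second
        for attempt := 1; attempt <= 5; attempt++ {
            if ip, err := lookupDomainIP("kindnet-644084"); err == nil {
                fmt.Println("found domain IP:", ip)
                return
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
            fmt.Printf("attempt %d: will retry after %s\n", attempt, wait)
            time.Sleep(wait)
            delay *= 2 // the intervals in the log grow roughly this way
        }
        fmt.Println("gave up waiting for domain to come up")
    }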
	I0904 07:01:15.249136 1161036 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.381239637s
	I0904 07:01:16.611320 1161036 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.744293495s
	I0904 07:01:18.367678 1161036 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.501606561s
	I0904 07:01:18.381492 1161036 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0904 07:01:18.392820 1161036 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0904 07:01:18.403474 1161036 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0904 07:01:18.403754 1161036 kubeadm.go:310] [mark-control-plane] Marking the node auto-644084 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0904 07:01:18.413012 1161036 kubeadm.go:310] [bootstrap-token] Using token: r8f1gr.b3hnw7k15x3h1e9w
	I0904 07:01:18.414245 1161036 out.go:252]   - Configuring RBAC rules ...
	I0904 07:01:18.414400 1161036 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0904 07:01:18.421710 1161036 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0904 07:01:18.428567 1161036 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0904 07:01:18.434116 1161036 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0904 07:01:18.441965 1161036 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0904 07:01:18.448501 1161036 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0904 07:01:18.776213 1161036 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0904 07:01:19.213174 1161036 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0904 07:01:19.773634 1161036 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0904 07:01:19.775511 1161036 kubeadm.go:310] 
	I0904 07:01:19.775626 1161036 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0904 07:01:19.775639 1161036 kubeadm.go:310] 
	I0904 07:01:19.775761 1161036 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0904 07:01:19.775778 1161036 kubeadm.go:310] 
	I0904 07:01:19.775816 1161036 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0904 07:01:19.775904 1161036 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0904 07:01:19.775994 1161036 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0904 07:01:19.776020 1161036 kubeadm.go:310] 
	I0904 07:01:19.776118 1161036 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0904 07:01:19.776134 1161036 kubeadm.go:310] 
	I0904 07:01:19.776202 1161036 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0904 07:01:19.776213 1161036 kubeadm.go:310] 
	I0904 07:01:19.776302 1161036 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0904 07:01:19.776424 1161036 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0904 07:01:19.776532 1161036 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0904 07:01:19.776568 1161036 kubeadm.go:310] 
	I0904 07:01:19.776697 1161036 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0904 07:01:19.776792 1161036 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0904 07:01:19.776802 1161036 kubeadm.go:310] 
	I0904 07:01:19.776923 1161036 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token r8f1gr.b3hnw7k15x3h1e9w \
	I0904 07:01:19.777075 1161036 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2651308ab51fc83fc020f40c2b31f227a6667a51808f73ed273560ac054e9c36 \
	I0904 07:01:19.777130 1161036 kubeadm.go:310] 	--control-plane 
	I0904 07:01:19.777148 1161036 kubeadm.go:310] 
	I0904 07:01:19.777260 1161036 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0904 07:01:19.777270 1161036 kubeadm.go:310] 
	I0904 07:01:19.777378 1161036 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token r8f1gr.b3hnw7k15x3h1e9w \
	I0904 07:01:19.777556 1161036 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2651308ab51fc83fc020f40c2b31f227a6667a51808f73ed273560ac054e9c36 
	I0904 07:01:19.777724 1161036 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
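The --discovery-token-ca-cert-hash value in the join command above is, per kubeadm's documentation, the hex-encoded SHA-256 of the cluster CA certificate's Subject Public Key Info (SPKI). A minimal sketch that recomputes it from a CA PEM (on the node the CA lives at /etc/kubernetes/pki/ca.crt):

    // cahash.go: recompute kubeadm's discovery-token-ca-cert-hash.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum) // should match the hash in the join command
    }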
	I0904 07:01:19.777739 1161036 cni.go:84] Creating CNI manager for ""
	I0904 07:01:19.777750 1161036 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 07:01:19.779624 1161036 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0904 07:01:19.780667 1161036 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0904 07:01:19.795052 1161036 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
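The 496-byte file scp'd to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI config. The log does not show its contents; for orientation only, a typical bridge conflist in the standard CNI format looks roughly like this (an illustrative sketch, not the byte-for-byte file):

    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }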
	I0904 07:01:19.815906 1161036 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 07:01:19.816037 1161036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:01:19.816077 1161036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-644084 minikube.k8s.io/updated_at=2025_09_04T07_01_19_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=c3fa37de45a2901b215fab008201edf72ce5a1ff minikube.k8s.io/name=auto-644084 minikube.k8s.io/primary=true
	I0904 07:01:19.861305 1161036 ops.go:34] apiserver oom_adj: -16
	I0904 07:01:19.977682 1161036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:01:20.471560 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:20.471979 1161522 main.go:141] libmachine: (kindnet-644084) DBG | unable to find current IP address of domain kindnet-644084 in network mk-kindnet-644084
	I0904 07:01:20.472003 1161522 main.go:141] libmachine: (kindnet-644084) DBG | I0904 07:01:20.471957 1161573 retry.go:31] will retry after 4.751158317s: waiting for domain to come up
	I0904 07:01:20.478299 1161036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:01:20.978055 1161036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:01:21.477975 1161036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:01:21.978494 1161036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:01:22.478619 1161036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:01:22.978475 1161036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:01:23.477941 1161036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:01:23.978064 1161036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:01:24.478754 1161036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:01:24.608139 1161036 kubeadm.go:1105] duration metric: took 4.79217007s to wait for elevateKubeSystemPrivileges
	I0904 07:01:24.608193 1161036 kubeadm.go:394] duration metric: took 16.627710729s to StartCluster
	I0904 07:01:24.608222 1161036 settings.go:142] acquiring lock: {Name:mkb015a02541f006ebfff677085f6c9619eaacb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:01:24.608314 1161036 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-1115845/kubeconfig
	I0904 07:01:24.609492 1161036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/kubeconfig: {Name:mk586aba4eac8031d07aaf208d256e06f68e9260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:01:24.609734 1161036 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 07:01:24.609747 1161036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0904 07:01:24.609768 1161036 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0904 07:01:24.609879 1161036 addons.go:69] Setting storage-provisioner=true in profile "auto-644084"
	I0904 07:01:24.609885 1161036 addons.go:69] Setting default-storageclass=true in profile "auto-644084"
	I0904 07:01:24.609906 1161036 addons.go:238] Setting addon storage-provisioner=true in "auto-644084"
	I0904 07:01:24.609946 1161036 host.go:66] Checking if "auto-644084" exists ...
	I0904 07:01:24.609907 1161036 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-644084"
	I0904 07:01:24.610029 1161036 config.go:182] Loaded profile config "auto-644084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 07:01:24.610489 1161036 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 07:01:24.610536 1161036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 07:01:24.610489 1161036 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 07:01:24.610680 1161036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 07:01:24.611332 1161036 out.go:179] * Verifying Kubernetes components...
	I0904 07:01:24.612700 1161036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 07:01:24.626703 1161036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46651
	I0904 07:01:24.626720 1161036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43867
	I0904 07:01:24.627295 1161036 main.go:141] libmachine: () Calling .GetVersion
	I0904 07:01:24.627365 1161036 main.go:141] libmachine: () Calling .GetVersion
	I0904 07:01:24.627859 1161036 main.go:141] libmachine: Using API Version  1
	I0904 07:01:24.627883 1161036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 07:01:24.628011 1161036 main.go:141] libmachine: Using API Version  1
	I0904 07:01:24.628031 1161036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 07:01:24.628267 1161036 main.go:141] libmachine: () Calling .GetMachineName
	I0904 07:01:24.628396 1161036 main.go:141] libmachine: () Calling .GetMachineName
	I0904 07:01:24.628433 1161036 main.go:141] libmachine: (auto-644084) Calling .GetState
	I0904 07:01:24.629001 1161036 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 07:01:24.629043 1161036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 07:01:24.632023 1161036 addons.go:238] Setting addon default-storageclass=true in "auto-644084"
	I0904 07:01:24.632060 1161036 host.go:66] Checking if "auto-644084" exists ...
	I0904 07:01:24.632316 1161036 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 07:01:24.632357 1161036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 07:01:24.646228 1161036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37437
	I0904 07:01:24.646795 1161036 main.go:141] libmachine: () Calling .GetVersion
	I0904 07:01:24.647447 1161036 main.go:141] libmachine: Using API Version  1
	I0904 07:01:24.647477 1161036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 07:01:24.647954 1161036 main.go:141] libmachine: () Calling .GetMachineName
	I0904 07:01:24.648206 1161036 main.go:141] libmachine: (auto-644084) Calling .GetState
	I0904 07:01:24.649230 1161036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38343
	I0904 07:01:24.649825 1161036 main.go:141] libmachine: () Calling .GetVersion
	I0904 07:01:24.650212 1161036 main.go:141] libmachine: (auto-644084) Calling .DriverName
	I0904 07:01:24.650412 1161036 main.go:141] libmachine: Using API Version  1
	I0904 07:01:24.650441 1161036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 07:01:24.650906 1161036 main.go:141] libmachine: () Calling .GetMachineName
	I0904 07:01:24.651382 1161036 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 07:01:24.651417 1161036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 07:01:24.652124 1161036 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 07:01:24.653279 1161036 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 07:01:24.653303 1161036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 07:01:24.653327 1161036 main.go:141] libmachine: (auto-644084) Calling .GetSSHHostname
	I0904 07:01:24.656786 1161036 main.go:141] libmachine: (auto-644084) DBG | domain auto-644084 has defined MAC address 52:54:00:d7:b9:91 in network mk-auto-644084
	I0904 07:01:24.657202 1161036 main.go:141] libmachine: (auto-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:b9:91", ip: ""} in network mk-auto-644084: {Iface:virbr3 ExpiryTime:2025-09-04 08:00:52 +0000 UTC Type:0 Mac:52:54:00:d7:b9:91 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:auto-644084 Clientid:01:52:54:00:d7:b9:91}
	I0904 07:01:24.657231 1161036 main.go:141] libmachine: (auto-644084) DBG | domain auto-644084 has defined IP address 192.168.61.91 and MAC address 52:54:00:d7:b9:91 in network mk-auto-644084
	I0904 07:01:24.657404 1161036 main.go:141] libmachine: (auto-644084) Calling .GetSSHPort
	I0904 07:01:24.657569 1161036 main.go:141] libmachine: (auto-644084) Calling .GetSSHKeyPath
	I0904 07:01:24.657756 1161036 main.go:141] libmachine: (auto-644084) Calling .GetSSHUsername
	I0904 07:01:24.657874 1161036 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/auto-644084/id_rsa Username:docker}
	I0904 07:01:24.672209 1161036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33835
	I0904 07:01:24.672906 1161036 main.go:141] libmachine: () Calling .GetVersion
	I0904 07:01:24.673495 1161036 main.go:141] libmachine: Using API Version  1
	I0904 07:01:24.673518 1161036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 07:01:24.673854 1161036 main.go:141] libmachine: () Calling .GetMachineName
	I0904 07:01:24.674025 1161036 main.go:141] libmachine: (auto-644084) Calling .GetState
	I0904 07:01:24.675938 1161036 main.go:141] libmachine: (auto-644084) Calling .DriverName
	I0904 07:01:24.676139 1161036 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 07:01:24.676155 1161036 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 07:01:24.676173 1161036 main.go:141] libmachine: (auto-644084) Calling .GetSSHHostname
	I0904 07:01:24.679460 1161036 main.go:141] libmachine: (auto-644084) DBG | domain auto-644084 has defined MAC address 52:54:00:d7:b9:91 in network mk-auto-644084
	I0904 07:01:24.679933 1161036 main.go:141] libmachine: (auto-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:b9:91", ip: ""} in network mk-auto-644084: {Iface:virbr3 ExpiryTime:2025-09-04 08:00:52 +0000 UTC Type:0 Mac:52:54:00:d7:b9:91 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:auto-644084 Clientid:01:52:54:00:d7:b9:91}
	I0904 07:01:24.679951 1161036 main.go:141] libmachine: (auto-644084) DBG | domain auto-644084 has defined IP address 192.168.61.91 and MAC address 52:54:00:d7:b9:91 in network mk-auto-644084
	I0904 07:01:24.680204 1161036 main.go:141] libmachine: (auto-644084) Calling .GetSSHPort
	I0904 07:01:24.680413 1161036 main.go:141] libmachine: (auto-644084) Calling .GetSSHKeyPath
	I0904 07:01:24.680589 1161036 main.go:141] libmachine: (auto-644084) Calling .GetSSHUsername
	I0904 07:01:24.680728 1161036 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/auto-644084/id_rsa Username:docker}
	I0904 07:01:24.852927 1161036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0904 07:01:24.910035 1161036 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 07:01:25.093937 1161036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 07:01:25.114539 1161036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 07:01:25.664882 1161036 start.go:976] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
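The sed pipeline at 07:01:24.852927 splices a hosts stanza into the CoreDNS Corefile (and a log directive before errors) so that host.minikube.internal resolves to the host-side gateway 192.168.61.1 from inside the cluster. Reconstructed from the sed expressions themselves, the relevant Corefile fragment after the replace reads:

        hosts {
           192.168.61.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

The fallthrough line lets queries that do not match the hosts table continue on to the forward plugin.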
	I0904 07:01:25.665023 1161036 main.go:141] libmachine: Making call to close driver server
	I0904 07:01:25.665110 1161036 main.go:141] libmachine: (auto-644084) Calling .Close
	I0904 07:01:25.665478 1161036 main.go:141] libmachine: Successfully made call to close driver server
	I0904 07:01:25.665497 1161036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 07:01:25.665507 1161036 main.go:141] libmachine: Making call to close driver server
	I0904 07:01:25.665516 1161036 main.go:141] libmachine: (auto-644084) Calling .Close
	I0904 07:01:25.665792 1161036 main.go:141] libmachine: Successfully made call to close driver server
	I0904 07:01:25.665830 1161036 main.go:141] libmachine: (auto-644084) DBG | Closing plugin on server side
	I0904 07:01:25.665846 1161036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 07:01:25.666207 1161036 node_ready.go:35] waiting up to 15m0s for node "auto-644084" to be "Ready" ...
	I0904 07:01:25.699220 1161036 node_ready.go:49] node "auto-644084" is "Ready"
	I0904 07:01:25.699255 1161036 node_ready.go:38] duration metric: took 33.015328ms for node "auto-644084" to be "Ready" ...
	I0904 07:01:25.699273 1161036 api_server.go:52] waiting for apiserver process to appear ...
	I0904 07:01:25.699333 1161036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 07:01:25.722243 1161036 main.go:141] libmachine: Making call to close driver server
	I0904 07:01:25.722272 1161036 main.go:141] libmachine: (auto-644084) Calling .Close
	I0904 07:01:25.722547 1161036 main.go:141] libmachine: Successfully made call to close driver server
	I0904 07:01:25.722566 1161036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 07:01:25.722582 1161036 main.go:141] libmachine: (auto-644084) DBG | Closing plugin on server side
	I0904 07:01:26.131289 1161036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.01669606s)
	I0904 07:01:26.131360 1161036 main.go:141] libmachine: Making call to close driver server
	I0904 07:01:26.131377 1161036 main.go:141] libmachine: (auto-644084) Calling .Close
	I0904 07:01:26.131394 1161036 api_server.go:72] duration metric: took 1.521629414s to wait for apiserver process to appear ...
	I0904 07:01:26.131420 1161036 api_server.go:88] waiting for apiserver healthz status ...
	I0904 07:01:26.131443 1161036 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0904 07:01:26.131753 1161036 main.go:141] libmachine: Successfully made call to close driver server
	I0904 07:01:26.131772 1161036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 07:01:26.131782 1161036 main.go:141] libmachine: Making call to close driver server
	I0904 07:01:26.131790 1161036 main.go:141] libmachine: (auto-644084) Calling .Close
	I0904 07:01:26.132079 1161036 main.go:141] libmachine: Successfully made call to close driver server
	I0904 07:01:26.132100 1161036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 07:01:26.133568 1161036 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0904 07:01:27.112230 1161732 start.go:364] duration metric: took 17.088294365s to acquireMachinesLock for "pause-017566"
	I0904 07:01:27.112296 1161732 start.go:96] Skipping create...Using existing machine configuration
	I0904 07:01:27.112305 1161732 fix.go:54] fixHost starting: 
	I0904 07:01:27.112765 1161732 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 07:01:27.112831 1161732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 07:01:27.132201 1161732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45975
	I0904 07:01:27.132672 1161732 main.go:141] libmachine: () Calling .GetVersion
	I0904 07:01:27.133209 1161732 main.go:141] libmachine: Using API Version  1
	I0904 07:01:27.133241 1161732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 07:01:27.133709 1161732 main.go:141] libmachine: () Calling .GetMachineName
	I0904 07:01:27.133962 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:27.134143 1161732 main.go:141] libmachine: (pause-017566) Calling .GetState
	I0904 07:01:27.136167 1161732 fix.go:112] recreateIfNeeded on pause-017566: state=Running err=<nil>
	W0904 07:01:27.136193 1161732 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 07:01:25.228029 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.228525 1161522 main.go:141] libmachine: (kindnet-644084) found domain IP: 192.168.83.184
	I0904 07:01:25.228550 1161522 main.go:141] libmachine: (kindnet-644084) reserving static IP address...
	I0904 07:01:25.228562 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has current primary IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.229075 1161522 main.go:141] libmachine: (kindnet-644084) DBG | unable to find host DHCP lease matching {name: "kindnet-644084", mac: "52:54:00:f6:90:8a", ip: "192.168.83.184"} in network mk-kindnet-644084
	I0904 07:01:25.307695 1161522 main.go:141] libmachine: (kindnet-644084) reserved static IP address 192.168.83.184 for domain kindnet-644084
	I0904 07:01:25.307728 1161522 main.go:141] libmachine: (kindnet-644084) DBG | Getting to WaitForSSH function...
	I0904 07:01:25.307736 1161522 main.go:141] libmachine: (kindnet-644084) waiting for SSH...
	I0904 07:01:25.310704 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.311278 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:25.311325 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.311354 1161522 main.go:141] libmachine: (kindnet-644084) DBG | Using SSH client type: external
	I0904 07:01:25.311416 1161522 main.go:141] libmachine: (kindnet-644084) DBG | Using SSH private key: /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/kindnet-644084/id_rsa (-rw-------)
	I0904 07:01:25.311469 1161522 main.go:141] libmachine: (kindnet-644084) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/kindnet-644084/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0904 07:01:25.311493 1161522 main.go:141] libmachine: (kindnet-644084) DBG | About to run SSH command:
	I0904 07:01:25.311507 1161522 main.go:141] libmachine: (kindnet-644084) DBG | exit 0
	I0904 07:01:25.447275 1161522 main.go:141] libmachine: (kindnet-644084) DBG | SSH cmd err, output: <nil>: 
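[Editor's note] The WaitForSSH phase that just completed repeatedly runs `exit 0` through the external ssh client whose full flag list is logged a few lines earlier; success means the guest's sshd is up. A sketch under those assumptions (WaitForSSH, the attempt count, and the sleep are hypothetical, and the flag list is abbreviated from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // WaitForSSH probes the guest by running `exit 0` over ssh until the
    // command succeeds, i.e. sshd accepts the key and runs a shell.
    func WaitForSSH(ip, keyPath string) error {
        for attempt := 0; attempt < 60; attempt++ {
            cmd := exec.Command("/usr/bin/ssh",
                "-o", "StrictHostKeyChecking=no",
                "-o", "UserKnownHostsFile=/dev/null",
                "-o", "ConnectTimeout=10",
                "-i", keyPath,
                "docker@"+ip, "exit 0")
            if err := cmd.Run(); err == nil {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("ssh to %s did not come up", ip)
    }

    func main() {
        fmt.Println(WaitForSSH("192.168.83.184",
            "/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/kindnet-644084/id_rsa"))
    }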
	I0904 07:01:25.447611 1161522 main.go:141] libmachine: (kindnet-644084) KVM machine creation complete
	I0904 07:01:25.447970 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetConfigRaw
	I0904 07:01:25.448694 1161522 main.go:141] libmachine: (kindnet-644084) Calling .DriverName
	I0904 07:01:25.448949 1161522 main.go:141] libmachine: (kindnet-644084) Calling .DriverName
	I0904 07:01:25.449124 1161522 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0904 07:01:25.449142 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetState
	I0904 07:01:25.450505 1161522 main.go:141] libmachine: Detecting operating system of created instance...
	I0904 07:01:25.450522 1161522 main.go:141] libmachine: Waiting for SSH to be available...
	I0904 07:01:25.450529 1161522 main.go:141] libmachine: Getting to WaitForSSH function...
	I0904 07:01:25.450538 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHHostname
	I0904 07:01:25.453931 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.454362 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:25.454390 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.454538 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHPort
	I0904 07:01:25.454745 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:25.454962 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:25.455138 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHUsername
	I0904 07:01:25.455391 1161522 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:25.455708 1161522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.83.184 22 <nil> <nil>}
	I0904 07:01:25.455723 1161522 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0904 07:01:25.574620 1161522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 07:01:25.574651 1161522 main.go:141] libmachine: Detecting the provisioner...
	I0904 07:01:25.574662 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHHostname
	I0904 07:01:25.578426 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.578862 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:25.578895 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.579097 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHPort
	I0904 07:01:25.579324 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:25.579519 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:25.579700 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHUsername
	I0904 07:01:25.579886 1161522 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:25.580192 1161522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.83.184 22 <nil> <nil>}
	I0904 07:01:25.580207 1161522 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0904 07:01:25.700679 1161522 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0904 07:01:25.700771 1161522 main.go:141] libmachine: found compatible host: buildroot
	I0904 07:01:25.700789 1161522 main.go:141] libmachine: Provisioning with buildroot...
	I0904 07:01:25.700803 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetMachineName
	I0904 07:01:25.701110 1161522 buildroot.go:166] provisioning hostname "kindnet-644084"
	I0904 07:01:25.701145 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetMachineName
	I0904 07:01:25.701360 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHHostname
	I0904 07:01:25.704760 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.705200 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:25.705232 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.705422 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHPort
	I0904 07:01:25.705590 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:25.705702 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:25.705909 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHUsername
	I0904 07:01:25.706130 1161522 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:25.706393 1161522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.83.184 22 <nil> <nil>}
	I0904 07:01:25.706407 1161522 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-644084 && echo "kindnet-644084" | sudo tee /etc/hostname
	I0904 07:01:25.851503 1161522 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-644084
	
	I0904 07:01:25.851543 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHHostname
	I0904 07:01:25.855258 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.855671 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:25.855721 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.855907 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHPort
	I0904 07:01:25.856136 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:25.856302 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:25.856474 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHUsername
	I0904 07:01:25.856635 1161522 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:25.856882 1161522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.83.184 22 <nil> <nil>}
	I0904 07:01:25.856900 1161522 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-644084' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-644084/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-644084' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 07:01:25.982172 1161522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 07:01:25.982274 1161522 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-1115845/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-1115845/.minikube}
	I0904 07:01:25.982321 1161522 buildroot.go:174] setting up certificates
	I0904 07:01:25.982336 1161522 provision.go:84] configureAuth start
	I0904 07:01:25.982357 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetMachineName
	I0904 07:01:25.982721 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetIP
	I0904 07:01:25.985838 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.986277 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:25.986329 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.986494 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHHostname
	I0904 07:01:25.989308 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.989654 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:25.989727 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.989989 1161522 provision.go:143] copyHostCerts
	I0904 07:01:25.990089 1161522 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.pem, removing ...
	I0904 07:01:25.990111 1161522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.pem
	I0904 07:01:25.990187 1161522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.pem (1082 bytes)
	I0904 07:01:25.990343 1161522 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1115845/.minikube/cert.pem, removing ...
	I0904 07:01:25.990358 1161522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1115845/.minikube/cert.pem
	I0904 07:01:25.990401 1161522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-1115845/.minikube/cert.pem (1123 bytes)
	I0904 07:01:25.990657 1161522 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1115845/.minikube/key.pem, removing ...
	I0904 07:01:25.990716 1161522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1115845/.minikube/key.pem
	I0904 07:01:25.991341 1161522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-1115845/.minikube/key.pem (1679 bytes)
	I0904 07:01:25.991497 1161522 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca-key.pem org=jenkins.kindnet-644084 san=[127.0.0.1 192.168.83.184 kindnet-644084 localhost minikube]
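[Editor's note] The server cert above is generated with the SAN list shown in the log (127.0.0.1, 192.168.83.184, kindnet-644084, localhost, minikube). A self-contained Go crypto/x509 sketch of building such a certificate follows; it self-signs for brevity, whereas the real server.pem is signed by the minikube CA (ca.pem/ca-key.pem), and error handling is elided.

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Key plus certificate template; SANs and org copied from the
        // san=[...] and org= fields in the log line above.
        key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-644084"}},
            DNSNames:     []string{"kindnet-644084", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.83.184")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Self-signed here (parent == template); minikube instead signs
        // with its CA key.
        der, _ := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }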
	I0904 07:01:26.364318 1161522 provision.go:177] copyRemoteCerts
	I0904 07:01:26.364435 1161522 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 07:01:26.364479 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHHostname
	I0904 07:01:26.367262 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:26.367608 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:26.367638 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:26.367812 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHPort
	I0904 07:01:26.368060 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:26.368256 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHUsername
	I0904 07:01:26.368410 1161522 sshutil.go:53] new ssh client: &{IP:192.168.83.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/kindnet-644084/id_rsa Username:docker}
	I0904 07:01:26.455226 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0904 07:01:26.490466 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0904 07:01:26.526556 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0904 07:01:26.562461 1161522 provision.go:87] duration metric: took 580.106076ms to configureAuth
	I0904 07:01:26.562502 1161522 buildroot.go:189] setting minikube options for container-runtime
	I0904 07:01:26.562712 1161522 config.go:182] Loaded profile config "kindnet-644084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 07:01:26.562810 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHHostname
	I0904 07:01:26.566326 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:26.566743 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:26.566779 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:26.566940 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHPort
	I0904 07:01:26.567186 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:26.567342 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:26.567527 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHUsername
	I0904 07:01:26.567748 1161522 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:26.568009 1161522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.83.184 22 <nil> <nil>}
	I0904 07:01:26.568030 1161522 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 07:01:26.842175 1161522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 07:01:26.842220 1161522 main.go:141] libmachine: Checking connection to Docker...
	I0904 07:01:26.842230 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetURL
	I0904 07:01:26.843536 1161522 main.go:141] libmachine: (kindnet-644084) DBG | using libvirt version 6000000
	I0904 07:01:26.845918 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:26.846295 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:26.846317 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:26.846546 1161522 main.go:141] libmachine: Docker is up and running!
	I0904 07:01:26.846562 1161522 main.go:141] libmachine: Reticulating splines...
	I0904 07:01:26.846572 1161522 client.go:171] duration metric: took 25.604002763s to LocalClient.Create
	I0904 07:01:26.846607 1161522 start.go:167] duration metric: took 25.604075218s to libmachine.API.Create "kindnet-644084"
	I0904 07:01:26.846622 1161522 start.go:293] postStartSetup for "kindnet-644084" (driver="kvm2")
	I0904 07:01:26.846636 1161522 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 07:01:26.846662 1161522 main.go:141] libmachine: (kindnet-644084) Calling .DriverName
	I0904 07:01:26.846938 1161522 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 07:01:26.846967 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHHostname
	I0904 07:01:26.849284 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:26.849629 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:26.849662 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:26.849789 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHPort
	I0904 07:01:26.849985 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:26.850156 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHUsername
	I0904 07:01:26.850330 1161522 sshutil.go:53] new ssh client: &{IP:192.168.83.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/kindnet-644084/id_rsa Username:docker}
	I0904 07:01:26.939848 1161522 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 07:01:26.944669 1161522 info.go:137] Remote host: Buildroot 2025.02
	I0904 07:01:26.944695 1161522 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1115845/.minikube/addons for local assets ...
	I0904 07:01:26.944758 1161522 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1115845/.minikube/files for local assets ...
	I0904 07:01:26.944832 1161522 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-1115845/.minikube/files/etc/ssl/certs/11200742.pem -> 11200742.pem in /etc/ssl/certs
	I0904 07:01:26.944918 1161522 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 07:01:26.956359 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/files/etc/ssl/certs/11200742.pem --> /etc/ssl/certs/11200742.pem (1708 bytes)
	I0904 07:01:26.986211 1161522 start.go:296] duration metric: took 139.572703ms for postStartSetup
	I0904 07:01:26.986260 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetConfigRaw
	I0904 07:01:26.986933 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetIP
	I0904 07:01:26.989754 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:26.990151 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:26.990197 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:26.990470 1161522 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/config.json ...
	I0904 07:01:26.990650 1161522 start.go:128] duration metric: took 25.769824603s to createHost
	I0904 07:01:26.990674 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHHostname
	I0904 07:01:26.993010 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:26.993292 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:26.993322 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:26.993510 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHPort
	I0904 07:01:26.993714 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:26.993881 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:26.994008 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHUsername
	I0904 07:01:26.994163 1161522 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:26.994406 1161522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.83.184 22 <nil> <nil>}
	I0904 07:01:26.994423 1161522 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0904 07:01:27.112031 1161522 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756969287.097530347
	
	I0904 07:01:27.112068 1161522 fix.go:216] guest clock: 1756969287.097530347
	I0904 07:01:27.112084 1161522 fix.go:229] Guest: 2025-09-04 07:01:27.097530347 +0000 UTC Remote: 2025-09-04 07:01:26.990662034 +0000 UTC m=+28.660247878 (delta=106.868313ms)
	I0904 07:01:27.112118 1161522 fix.go:200] guest clock delta is within tolerance: 106.868313ms
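[Editor's note] The fix.go lines above compare the guest's `date +%s.%N` output against the host-side timestamp and accept the machine when the delta is within tolerance (about 107ms here). A sketch of that comparison; ClockDelta is a hypothetical helper, not minikube's fix.go.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // ClockDelta parses `date +%s.%N` output from the guest and returns how
    // far its clock is from the given host timestamp. float64 rounds away
    // some nanoseconds, which is harmless at millisecond-scale tolerances.
    func ClockDelta(guestOut string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(host), nil
    }

    func main() {
        // Sample value taken from the log line above.
        d, err := ClockDelta("1756969287.097530347", time.Now())
        fmt.Println(d, err)
    }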
	I0904 07:01:27.112128 1161522 start.go:83] releasing machines lock for "kindnet-644084", held for 25.891490526s
	I0904 07:01:27.112165 1161522 main.go:141] libmachine: (kindnet-644084) Calling .DriverName
	I0904 07:01:27.112453 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetIP
	I0904 07:01:27.115601 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:27.116034 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:27.116065 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:27.116363 1161522 main.go:141] libmachine: (kindnet-644084) Calling .DriverName
	I0904 07:01:27.116925 1161522 main.go:141] libmachine: (kindnet-644084) Calling .DriverName
	I0904 07:01:27.117119 1161522 main.go:141] libmachine: (kindnet-644084) Calling .DriverName
	I0904 07:01:27.117283 1161522 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 07:01:27.117340 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHHostname
	I0904 07:01:27.117358 1161522 ssh_runner.go:195] Run: cat /version.json
	I0904 07:01:27.117384 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHHostname
	I0904 07:01:27.120542 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:27.120739 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:27.121014 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:27.121041 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:27.121153 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:27.121183 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:27.121215 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHPort
	I0904 07:01:27.121384 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHPort
	I0904 07:01:27.121403 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:27.121546 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:27.121599 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHUsername
	I0904 07:01:27.121688 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHUsername
	I0904 07:01:27.121849 1161522 sshutil.go:53] new ssh client: &{IP:192.168.83.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/kindnet-644084/id_rsa Username:docker}
	I0904 07:01:27.121857 1161522 sshutil.go:53] new ssh client: &{IP:192.168.83.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/kindnet-644084/id_rsa Username:docker}
	I0904 07:01:27.203864 1161522 ssh_runner.go:195] Run: systemctl --version
	I0904 07:01:27.243120 1161522 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 07:01:27.405357 1161522 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0904 07:01:27.413608 1161522 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0904 07:01:27.413672 1161522 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 07:01:27.436031 1161522 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0904 07:01:27.436060 1161522 start.go:495] detecting cgroup driver to use...
	I0904 07:01:27.436135 1161522 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 07:01:27.457457 1161522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 07:01:27.475563 1161522 docker.go:218] disabling cri-docker service (if available) ...
	I0904 07:01:27.475657 1161522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 07:01:27.497358 1161522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 07:01:27.515372 1161522 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 07:01:27.689461 1161522 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 07:01:27.848740 1161522 docker.go:234] disabling docker service ...
	I0904 07:01:27.848817 1161522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 07:01:27.865300 1161522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 07:01:27.879500 1161522 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 07:01:28.097419 1161522 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 07:01:28.245948 1161522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 07:01:28.262314 1161522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 07:01:28.285064 1161522 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0904 07:01:28.285155 1161522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:28.297810 1161522 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 07:01:28.297898 1161522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:28.311022 1161522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:28.323750 1161522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:28.336904 1161522 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 07:01:28.349729 1161522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:28.360806 1161522 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:28.379317 1161522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
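[Editor's note] Each sed invocation above is a line-anchored replace or insert against /etc/crio/crio.conf.d/02-crio.conf, leaving it with pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and a default_sysctls list that opens ports below 1024 to unprivileged processes. For illustration, the first edit redone with Go's regexp package; this is a hypothetical sketch, as minikube shells out to sed exactly as logged.

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Stand-in config content; on the VM this comes from 02-crio.conf.
        conf := `pause_image = "registry.k8s.io/pause:3.9"` + "\n" +
            `cgroup_manager = "systemd"`
        // Equivalent of: sed 's|^.*pause_image = .*$|pause_image = "..."|'
        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        out := re.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
        fmt.Println(out)
    }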
	I0904 07:01:28.390715 1161522 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 07:01:28.400095 1161522 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0904 07:01:28.400167 1161522 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0904 07:01:28.417964 1161522 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 07:01:28.428649 1161522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 07:01:28.574072 1161522 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0904 07:01:28.680048 1161522 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 07:01:28.680129 1161522 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 07:01:28.684968 1161522 start.go:563] Will wait 60s for crictl version
	I0904 07:01:28.685019 1161522 ssh_runner.go:195] Run: which crictl
	I0904 07:01:28.688871 1161522 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 07:01:28.726950 1161522 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0904 07:01:28.727024 1161522 ssh_runner.go:195] Run: crio --version
	I0904 07:01:28.755077 1161522 ssh_runner.go:195] Run: crio --version
	I0904 07:01:28.783931 1161522 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0904 07:01:27.138268 1161732 out.go:252] * Updating the running kvm2 "pause-017566" VM ...
	I0904 07:01:27.138298 1161732 machine.go:93] provisionDockerMachine start ...
	I0904 07:01:27.138313 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:27.138518 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:27.141213 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.141742 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:27.141767 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.142052 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:27.142211 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.142329 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.142435 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:27.142642 1161732 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:27.142939 1161732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0904 07:01:27.142951 1161732 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 07:01:27.264475 1161732 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-017566
	
	I0904 07:01:27.264509 1161732 main.go:141] libmachine: (pause-017566) Calling .GetMachineName
	I0904 07:01:27.264829 1161732 buildroot.go:166] provisioning hostname "pause-017566"
	I0904 07:01:27.264868 1161732 main.go:141] libmachine: (pause-017566) Calling .GetMachineName
	I0904 07:01:27.265100 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:27.268258 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.268727 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:27.268755 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.268949 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:27.269134 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.269298 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.269460 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:27.269625 1161732 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:27.269851 1161732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0904 07:01:27.269866 1161732 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-017566 && echo "pause-017566" | sudo tee /etc/hostname
	I0904 07:01:27.402385 1161732 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-017566
	
	I0904 07:01:27.402422 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:27.406417 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.406873 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:27.406898 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.407170 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:27.407411 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.407590 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.407783 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:27.408014 1161732 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:27.408402 1161732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0904 07:01:27.408442 1161732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-017566' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-017566/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-017566' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 07:01:27.536058 1161732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 07:01:27.536105 1161732 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-1115845/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-1115845/.minikube}
	I0904 07:01:27.536138 1161732 buildroot.go:174] setting up certificates
	I0904 07:01:27.536156 1161732 provision.go:84] configureAuth start
	I0904 07:01:27.536176 1161732 main.go:141] libmachine: (pause-017566) Calling .GetMachineName
	I0904 07:01:27.536479 1161732 main.go:141] libmachine: (pause-017566) Calling .GetIP
	I0904 07:01:27.539375 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.539785 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:27.539812 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.540030 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:27.542344 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.542629 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:27.542667 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.542888 1161732 provision.go:143] copyHostCerts
	I0904 07:01:27.542988 1161732 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1115845/.minikube/key.pem, removing ...
	I0904 07:01:27.543011 1161732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1115845/.minikube/key.pem
	I0904 07:01:27.543079 1161732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-1115845/.minikube/key.pem (1679 bytes)
	I0904 07:01:27.543198 1161732 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.pem, removing ...
	I0904 07:01:27.543210 1161732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.pem
	I0904 07:01:27.543244 1161732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.pem (1082 bytes)
	I0904 07:01:27.543319 1161732 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1115845/.minikube/cert.pem, removing ...
	I0904 07:01:27.543330 1161732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1115845/.minikube/cert.pem
	I0904 07:01:27.543357 1161732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-1115845/.minikube/cert.pem (1123 bytes)
	I0904 07:01:27.543418 1161732 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca-key.pem org=jenkins.pause-017566 san=[127.0.0.1 192.168.39.168 localhost minikube pause-017566]
	I0904 07:01:27.703670 1161732 provision.go:177] copyRemoteCerts
	I0904 07:01:27.703728 1161732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 07:01:27.703755 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:27.706487 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.706849 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:27.706884 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.707049 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:27.707243 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.707437 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:27.707651 1161732 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/pause-017566/id_rsa Username:docker}
	I0904 07:01:27.798776 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0904 07:01:27.833553 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0904 07:01:27.865385 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 07:01:27.894589 1161732 provision.go:87] duration metric: took 358.411244ms to configureAuth
	I0904 07:01:27.894626 1161732 buildroot.go:189] setting minikube options for container-runtime
	I0904 07:01:27.894995 1161732 config.go:182] Loaded profile config "pause-017566": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 07:01:27.895097 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:27.898221 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.898667 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:27.898715 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.898935 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:27.899156 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.899364 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.899545 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:27.899735 1161732 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:27.899945 1161732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0904 07:01:27.899959 1161732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 07:01:26.134641 1161036 addons.go:514] duration metric: took 1.524876476s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0904 07:01:26.152041 1161036 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I0904 07:01:26.153203 1161036 api_server.go:141] control plane version: v1.34.0
	I0904 07:01:26.153239 1161036 api_server.go:131] duration metric: took 21.810181ms to wait for apiserver health ...
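[Editor's note] The healthz wait that just resolved amounts to polling GET /healthz on the apiserver over HTTPS until it returns 200 with body "ok". A minimal sketch of one probe; skipping TLS verification is a simplification here, whereas minikube verifies against the cluster CA.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz issues a single GET against the healthz endpoint; the
    // wait loop in the log simply retries this until it returns nil.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Simplification: trust any serving cert for this sketch.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        return nil // body is "ok" on a healthy apiserver, as logged
    }

    func main() {
        fmt.Println(checkHealthz("https://192.168.61.91:8443/healthz"))
    }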
	I0904 07:01:26.153253 1161036 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 07:01:26.160679 1161036 system_pods.go:59] 8 kube-system pods found
	I0904 07:01:26.160717 1161036 system_pods.go:61] "coredns-66bc5c9577-lq225" [060ef169-5f90-41ea-92b3-1bdfc4cdb068] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 07:01:26.160727 1161036 system_pods.go:61] "coredns-66bc5c9577-qrmhs" [615315ae-405b-4992-841c-24f070bdb631] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 07:01:26.160735 1161036 system_pods.go:61] "etcd-auto-644084" [6220c93f-264a-4011-be10-58b5f20081b7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 07:01:26.160741 1161036 system_pods.go:61] "kube-apiserver-auto-644084" [092f42d8-0f0c-4a20-aced-13411a94e4fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 07:01:26.160745 1161036 system_pods.go:61] "kube-controller-manager-auto-644084" [b14e2b6f-66d6-4801-8d33-6e2f9e762a76] Running
	I0904 07:01:26.160751 1161036 system_pods.go:61] "kube-proxy-fqgp9" [4ab5cafd-94f5-4b23-8026-8208fb8ce408] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0904 07:01:26.160756 1161036 system_pods.go:61] "kube-scheduler-auto-644084" [b660931a-535c-470b-a314-68e4955c9af9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 07:01:26.160759 1161036 system_pods.go:61] "storage-provisioner" [2800a949-a2c3-4230-8f32-064780c523fb] Pending
	I0904 07:01:26.160766 1161036 system_pods.go:74] duration metric: took 7.506067ms to wait for pod list to return data ...
	I0904 07:01:26.160775 1161036 default_sa.go:34] waiting for default service account to be created ...
	I0904 07:01:26.166881 1161036 default_sa.go:45] found service account: "default"
	I0904 07:01:26.166909 1161036 default_sa.go:55] duration metric: took 6.127726ms for default service account to be created ...
	I0904 07:01:26.166929 1161036 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 07:01:26.170675 1161036 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-644084" context rescaled to 1 replicas
	I0904 07:01:26.174853 1161036 system_pods.go:86] 8 kube-system pods found
	I0904 07:01:26.174887 1161036 system_pods.go:89] "coredns-66bc5c9577-lq225" [060ef169-5f90-41ea-92b3-1bdfc4cdb068] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 07:01:26.174896 1161036 system_pods.go:89] "coredns-66bc5c9577-qrmhs" [615315ae-405b-4992-841c-24f070bdb631] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 07:01:26.174907 1161036 system_pods.go:89] "etcd-auto-644084" [6220c93f-264a-4011-be10-58b5f20081b7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 07:01:26.174921 1161036 system_pods.go:89] "kube-apiserver-auto-644084" [092f42d8-0f0c-4a20-aced-13411a94e4fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 07:01:26.174928 1161036 system_pods.go:89] "kube-controller-manager-auto-644084" [b14e2b6f-66d6-4801-8d33-6e2f9e762a76] Running
	I0904 07:01:26.174937 1161036 system_pods.go:89] "kube-proxy-fqgp9" [4ab5cafd-94f5-4b23-8026-8208fb8ce408] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0904 07:01:26.174950 1161036 system_pods.go:89] "kube-scheduler-auto-644084" [b660931a-535c-470b-a314-68e4955c9af9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 07:01:26.174967 1161036 system_pods.go:89] "storage-provisioner" [2800a949-a2c3-4230-8f32-064780c523fb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 07:01:26.175002 1161036 retry.go:31] will retry after 251.951297ms: missing components: kube-dns, kube-proxy
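The retry line above is minikube's own poll loop (the odd 251.951297ms interval suggests a jittered backoff). A hedged shell equivalent of the same wait, using the standard `k8s-app` labels for these components (the label keys are conventional Kubernetes labels, not taken from this log):

```bash
# Poll until no kube-dns or kube-proxy pod in kube-system is in a
# non-Running phase, sleeping 250ms between attempts.
until [ -z "$(kubectl -n kube-system get pods \
      -l 'k8s-app in (kube-dns, kube-proxy)' \
      --field-selector status.phase!=Running -o name)" ]; do
  sleep 0.25
done
```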
	I0904 07:01:26.430955 1161036 system_pods.go:86] 8 kube-system pods found
	I0904 07:01:26.430994 1161036 system_pods.go:89] "coredns-66bc5c9577-lq225" [060ef169-5f90-41ea-92b3-1bdfc4cdb068] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 07:01:26.431013 1161036 system_pods.go:89] "coredns-66bc5c9577-qrmhs" [615315ae-405b-4992-841c-24f070bdb631] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 07:01:26.431024 1161036 system_pods.go:89] "etcd-auto-644084" [6220c93f-264a-4011-be10-58b5f20081b7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 07:01:26.431030 1161036 system_pods.go:89] "kube-apiserver-auto-644084" [092f42d8-0f0c-4a20-aced-13411a94e4fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 07:01:26.431036 1161036 system_pods.go:89] "kube-controller-manager-auto-644084" [b14e2b6f-66d6-4801-8d33-6e2f9e762a76] Running
	I0904 07:01:26.431041 1161036 system_pods.go:89] "kube-proxy-fqgp9" [4ab5cafd-94f5-4b23-8026-8208fb8ce408] Running
	I0904 07:01:26.431048 1161036 system_pods.go:89] "kube-scheduler-auto-644084" [b660931a-535c-470b-a314-68e4955c9af9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 07:01:26.431055 1161036 system_pods.go:89] "storage-provisioner" [2800a949-a2c3-4230-8f32-064780c523fb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 07:01:26.431069 1161036 system_pods.go:126] duration metric: took 264.132562ms to wait for k8s-apps to be running ...
	I0904 07:01:26.431085 1161036 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 07:01:26.431154 1161036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 07:01:26.451788 1161036 system_svc.go:56] duration metric: took 20.689252ms WaitForService to wait for kubelet
	I0904 07:01:26.451835 1161036 kubeadm.go:578] duration metric: took 1.842072357s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 07:01:26.451863 1161036 node_conditions.go:102] verifying NodePressure condition ...
	I0904 07:01:26.461327 1161036 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0904 07:01:26.461377 1161036 node_conditions.go:123] node cpu capacity is 2
	I0904 07:01:26.461396 1161036 node_conditions.go:105] duration metric: took 9.526558ms to run NodePressure ...
	I0904 07:01:26.461414 1161036 start.go:241] waiting for startup goroutines ...
	I0904 07:01:26.461432 1161036 start.go:246] waiting for cluster config update ...
	I0904 07:01:26.461449 1161036 start.go:255] writing updated cluster config ...
	I0904 07:01:26.461794 1161036 ssh_runner.go:195] Run: rm -f paused
	I0904 07:01:26.471380 1161036 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 07:01:26.476749 1161036 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lq225" in "kube-system" namespace to be "Ready" or be gone ...
	W0904 07:01:28.482448 1161036 pod_ready.go:104] pod "coredns-66bc5c9577-lq225" is not "Ready", error: <nil>
	I0904 07:01:28.785014 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetIP
	I0904 07:01:28.787980 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:28.788404 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:28.788434 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:28.788705 1161522 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0904 07:01:28.792973 1161522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
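The one-liner above is worth unpacking: it updates /etc/hosts idempotently by filtering out any previous entry for the name, appending the fresh mapping, and copying the temp file back (cp rather than mv, so the original file's inode and permissions are preserved). Expanded form, with the IP and hostname from this run:

```bash
# Idempotent /etc/hosts update (same commands as the log, reformatted):
# 1) drop any existing line ending in "<tab>host.minikube.internal"
# 2) append the current mapping
# 3) cp over the original so permissions and inode stay intact
{
  grep -v $'\thost.minikube.internal$' /etc/hosts
  printf '192.168.83.1\thost.minikube.internal\n'
} > "/tmp/h.$$"
sudo cp "/tmp/h.$$" /etc/hosts
```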
	I0904 07:01:28.807240 1161522 kubeadm.go:875] updating cluster {Name:kindnet-644084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-644084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.83.184 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 07:01:28.807354 1161522 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 07:01:28.807400 1161522 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 07:01:28.840993 1161522 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0904 07:01:28.841071 1161522 ssh_runner.go:195] Run: which lz4
	I0904 07:01:28.845181 1161522 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0904 07:01:28.849589 1161522 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0904 07:01:28.849627 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0904 07:01:30.209516 1161522 crio.go:462] duration metric: took 1.36436722s to copy over tarball
	I0904 07:01:30.209594 1161522 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0904 07:01:31.972342 1161522 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.762712301s)
	I0904 07:01:31.972381 1161522 crio.go:469] duration metric: took 1.762831752s to extract the tarball
	I0904 07:01:31.972405 1161522 ssh_runner.go:146] rm: /preloaded.tar.lz4
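For reference, the preload fast-path above can be reproduced by hand; the tar flags come straight from the log (the `--xattrs` options preserve file capabilities on the extracted binaries, and `lz4` was confirmed present with `which lz4` earlier):

```bash
# Unpack the preloaded image tarball into /var (CRI-O's image store lives
# under /var/lib/containers), then delete the ~400 MB archive and confirm
# the images are visible to the runtime.
sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
sudo rm -f /preloaded.tar.lz4
sudo crictl images --output json
```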
	I0904 07:01:32.016595 1161522 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 07:01:32.060481 1161522 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 07:01:32.060508 1161522 cache_images.go:85] Images are preloaded, skipping loading
	I0904 07:01:32.060518 1161522 kubeadm.go:926] updating node { 192.168.83.184 8443 v1.34.0 crio true true} ...
	I0904 07:01:32.060687 1161522 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-644084 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:kindnet-644084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
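The [Unit]/[Service] fragment above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; the empty `ExecStart=` line is the standard systemd idiom for clearing the base unit's command before substituting a new one. To inspect the merged result on the guest:

```bash
# Show the base unit plus all drop-ins, then the ExecStart systemd resolved.
systemctl cat kubelet
systemctl show kubelet -p ExecStart
```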
	I0904 07:01:32.060797 1161522 ssh_runner.go:195] Run: crio config
	I0904 07:01:32.108719 1161522 cni.go:84] Creating CNI manager for "kindnet"
	I0904 07:01:32.108815 1161522 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 07:01:32.108857 1161522 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.184 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-644084 NodeName:kindnet-644084 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 07:01:32.109098 1161522 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-644084"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.184"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.184"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
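The generated manifest above is written to /var/tmp/minikube/kubeadm.yaml.new just below and later copied to /var/tmp/minikube/kubeadm.yaml before init. If you want to sanity-check such a config without touching the node, kubeadm supports a dry run (hypothetical invocation, not something this test performs):

```bash
# Dry-run: validates the config and prints what init would do,
# without writing manifests or starting the control plane.
sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init \
  --config /var/tmp/minikube/kubeadm.yaml --dry-run
```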
	I0904 07:01:32.109217 1161522 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 07:01:32.121186 1161522 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 07:01:32.121288 1161522 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 07:01:32.132241 1161522 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0904 07:01:32.152136 1161522 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 07:01:32.173828 1161522 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I0904 07:01:32.192239 1161522 ssh_runner.go:195] Run: grep 192.168.83.184	control-plane.minikube.internal$ /etc/hosts
	I0904 07:01:32.195961 1161522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 07:01:32.210154 1161522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 07:01:32.355287 1161522 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 07:01:32.390149 1161522 certs.go:68] Setting up /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084 for IP: 192.168.83.184
	I0904 07:01:32.390190 1161522 certs.go:194] generating shared ca certs ...
	I0904 07:01:32.390217 1161522 certs.go:226] acquiring lock for ca certs: {Name:mkb48abb711128619cd278e65e40c326a6b20d89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:01:32.390458 1161522 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.key
	I0904 07:01:32.390524 1161522 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/proxy-client-ca.key
	I0904 07:01:32.390542 1161522 certs.go:256] generating profile certs ...
	I0904 07:01:32.390616 1161522 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.key
	I0904 07:01:32.390640 1161522 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.crt with IP's: []
	I0904 07:01:32.498401 1161522 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.crt ...
	I0904 07:01:32.498433 1161522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.crt: {Name:mk8af1151167c6e0451312073e46d6b07e92c708 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:01:32.498603 1161522 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.key ...
	I0904 07:01:32.498613 1161522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.key: {Name:mk995c07d994cb142636879d119a9beafc08719c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:01:32.498698 1161522 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/apiserver.key.67b6fcd7
	I0904 07:01:32.498714 1161522 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/apiserver.crt.67b6fcd7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.184]
	I0904 07:01:32.623726 1161522 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/apiserver.crt.67b6fcd7 ...
	I0904 07:01:32.623759 1161522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/apiserver.crt.67b6fcd7: {Name:mkcc545f92daa830e262441c44ee9cb94ed51df3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:01:32.623923 1161522 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/apiserver.key.67b6fcd7 ...
	I0904 07:01:32.623938 1161522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/apiserver.key.67b6fcd7: {Name:mkceeeeeb72bf43d2a7b5cbec52c04225f142b23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:01:32.624014 1161522 certs.go:381] copying /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/apiserver.crt.67b6fcd7 -> /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/apiserver.crt
	I0904 07:01:32.624086 1161522 certs.go:385] copying /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/apiserver.key.67b6fcd7 -> /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/apiserver.key
	I0904 07:01:32.624138 1161522 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/proxy-client.key
	I0904 07:01:32.624158 1161522 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/proxy-client.crt with IP's: []
	I0904 07:01:32.811341 1161522 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/proxy-client.crt ...
	I0904 07:01:32.811375 1161522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/proxy-client.crt: {Name:mk73c63e2a6016f2fab5cda0d37845d338b66f6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:01:32.811537 1161522 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/proxy-client.key ...
	I0904 07:01:32.811554 1161522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/proxy-client.key: {Name:mk64324bb782cd5fc411a021c50384d975d8d985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
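crypto.go is issuing the per-profile client, apiserver, and aggregator certs against the shared minikube CA. A rough openssl equivalent for one of them (the client cert), assuming ca.crt/ca.key are the CA pair referenced above; minikube does this in Go, so these flags are illustrative only:

```bash
# Illustrative only: issue a client cert signed by the minikube CA.
openssl req -new -newkey rsa:2048 -nodes \
  -subj "/O=system:masters/CN=minikube-user" \
  -keyout client.key -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out client.crt
```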
	I0904 07:01:32.811757 1161522 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/1120074.pem (1338 bytes)
	W0904 07:01:32.811798 1161522 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/1120074_empty.pem, impossibly tiny 0 bytes
	I0904 07:01:32.811808 1161522 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 07:01:32.811828 1161522 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem (1082 bytes)
	I0904 07:01:32.811851 1161522 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/cert.pem (1123 bytes)
	I0904 07:01:32.811872 1161522 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/key.pem (1679 bytes)
	I0904 07:01:32.811907 1161522 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/files/etc/ssl/certs/11200742.pem (1708 bytes)
	I0904 07:01:32.812499 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 07:01:32.841059 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0904 07:01:32.867610 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 07:01:32.896130 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0904 07:01:32.922646 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0904 07:01:32.949599 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 07:01:32.975535 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 07:01:33.002769 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0904 07:01:33.028991 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/1120074.pem --> /usr/share/ca-certificates/1120074.pem (1338 bytes)
	I0904 07:01:33.057026 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/files/etc/ssl/certs/11200742.pem --> /usr/share/ca-certificates/11200742.pem (1708 bytes)
	I0904 07:01:33.090886 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 07:01:33.135248 1161522 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 07:01:33.159742 1161522 ssh_runner.go:195] Run: openssl version
	I0904 07:01:33.167417 1161522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1120074.pem && ln -fs /usr/share/ca-certificates/1120074.pem /etc/ssl/certs/1120074.pem"
	I0904 07:01:33.184381 1161522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1120074.pem
	I0904 07:01:33.191187 1161522 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  4 06:04 /usr/share/ca-certificates/1120074.pem
	I0904 07:01:33.191267 1161522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1120074.pem
	I0904 07:01:33.199575 1161522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1120074.pem /etc/ssl/certs/51391683.0"
	I0904 07:01:33.212086 1161522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11200742.pem && ln -fs /usr/share/ca-certificates/11200742.pem /etc/ssl/certs/11200742.pem"
	I0904 07:01:33.225087 1161522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11200742.pem
	I0904 07:01:33.230385 1161522 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  4 06:04 /usr/share/ca-certificates/11200742.pem
	I0904 07:01:33.230444 1161522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11200742.pem
	I0904 07:01:33.237382 1161522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11200742.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 07:01:33.250165 1161522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 07:01:33.268084 1161522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 07:01:33.274327 1161522 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 05:54 /usr/share/ca-certificates/minikubeCA.pem
	I0904 07:01:33.274407 1161522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 07:01:33.283423 1161522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
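The opaque names 51391683.0, 3ec20f2e.0, and b5213941.0 above are OpenSSL subject-hash links: TLS libraries look up a CA in /etc/ssl/certs by the hash of its subject, so the symlink must carry exactly that name. The pattern, spelled out for the minikube CA:

```bash
# Compute the subject hash and create the <hash>.0 symlink OpenSSL expects.
h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
```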
	I0904 07:01:33.300612 1161522 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 07:01:33.305295 1161522 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0904 07:01:33.305369 1161522 kubeadm.go:392] StartCluster: {Name:kindnet-644084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-644084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.83.184 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 07:01:33.305481 1161522 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 07:01:33.305544 1161522 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 07:01:33.347155 1161522 cri.go:89] found id: ""
	I0904 07:01:33.347227 1161522 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 07:01:33.359854 1161522 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 07:01:33.371988 1161522 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 07:01:33.384104 1161522 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 07:01:33.384129 1161522 kubeadm.go:157] found existing configuration files:
	
	I0904 07:01:33.384183 1161522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0904 07:01:33.395069 1161522 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 07:01:33.395142 1161522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 07:01:33.407195 1161522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0904 07:01:33.418297 1161522 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 07:01:33.418386 1161522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 07:01:33.431714 1161522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0904 07:01:33.446566 1161522 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 07:01:33.446621 1161522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 07:01:33.459119 1161522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0904 07:01:33.470247 1161522 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 07:01:33.470363 1161522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
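The four grep/rm pairs above implement one rule: a leftover kubeconfig survives only if it already points at the expected control-plane endpoint. Condensed into a loop, with the same paths and endpoint as the log:

```bash
# Keep a kubeconfig only if it references the expected endpoint;
# otherwise remove it so kubeadm regenerates it.
for f in admin kubelet controller-manager scheduler; do
  sudo grep -q 'https://control-plane.minikube.internal:8443' \
    "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
done
```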
	I0904 07:01:33.485249 1161522 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0904 07:01:33.544936 1161522 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0904 07:01:33.545010 1161522 kubeadm.go:310] [preflight] Running pre-flight checks
	I0904 07:01:33.654436 1161522 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0904 07:01:33.654595 1161522 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0904 07:01:33.654762 1161522 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0904 07:01:33.665066 1161522 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0904 07:01:33.515165 1161732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 07:01:33.515196 1161732 machine.go:96] duration metric: took 6.376888505s to provisionDockerMachine
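The CRIO_MINIKUBE_OPTIONS echo above confirms the sysconfig drop-in written at the start of this section took effect. To double-check on a live guest, assuming the ISO's crio.service sources /etc/sysconfig/crio.minikube through an EnvironmentFile= directive (an assumption about the unit file, not shown in this log):

```bash
# Assumption: crio.service loads /etc/sysconfig/crio.minikube via EnvironmentFile.
systemctl cat crio | grep -n EnvironmentFile
# Inspect the environment the running crio process actually received:
sudo tr '\0' '\n' </proc/"$(pidof crio)"/environ | grep CRIO_MINIKUBE_OPTIONS
```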
	I0904 07:01:33.515212 1161732 start.go:293] postStartSetup for "pause-017566" (driver="kvm2")
	I0904 07:01:33.515226 1161732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 07:01:33.515249 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:33.515626 1161732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 07:01:33.515661 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:33.519114 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.519592 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:33.519624 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.519795 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:33.519977 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:33.520206 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:33.520390 1161732 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/pause-017566/id_rsa Username:docker}
	I0904 07:01:33.610679 1161732 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 07:01:33.616704 1161732 info.go:137] Remote host: Buildroot 2025.02
	I0904 07:01:33.616739 1161732 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1115845/.minikube/addons for local assets ...
	I0904 07:01:33.616814 1161732 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1115845/.minikube/files for local assets ...
	I0904 07:01:33.616905 1161732 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-1115845/.minikube/files/etc/ssl/certs/11200742.pem -> 11200742.pem in /etc/ssl/certs
	I0904 07:01:33.617040 1161732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 07:01:33.631551 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/files/etc/ssl/certs/11200742.pem --> /etc/ssl/certs/11200742.pem (1708 bytes)
	I0904 07:01:33.665307 1161732 start.go:296] duration metric: took 150.079866ms for postStartSetup
	I0904 07:01:33.665355 1161732 fix.go:56] duration metric: took 6.553050716s for fixHost
	I0904 07:01:33.665388 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:33.669609 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.670031 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:33.670076 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.670271 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:33.670479 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:33.670680 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:33.670879 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:33.671044 1161732 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:33.671293 1161732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0904 07:01:33.671311 1161732 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0904 07:01:33.787999 1161732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756969293.783889209
	
	I0904 07:01:33.788029 1161732 fix.go:216] guest clock: 1756969293.783889209
	I0904 07:01:33.788040 1161732 fix.go:229] Guest: 2025-09-04 07:01:33.783889209 +0000 UTC Remote: 2025-09-04 07:01:33.665366067 +0000 UTC m=+23.813013966 (delta=118.523142ms)
	I0904 07:01:33.788068 1161732 fix.go:200] guest clock delta is within tolerance: 118.523142ms
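fix.go is comparing the guest clock, read over SSH with `date +%s.%N`, against the host clock and accepting the 118ms delta. A standalone sketch of the same check; the 1-second tolerance is an assumption, and the ssh target mirrors the user/key shown in the sshutil lines above:

```bash
# Hedged skew check: read guest and host clocks, print the absolute delta,
# and exit 0 only when it is under the (assumed) 1s tolerance.
guest=$(ssh -i id_rsa docker@192.168.39.168 'date +%s.%N')
host=$(date +%s.%N)
awk -v g="$guest" -v h="$host" \
  'BEGIN { d = g - h; if (d < 0) d = -d; printf "delta=%.3fs\n", d; exit ((d < 1.0) ? 0 : 1) }'
```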
	I0904 07:01:33.788076 1161732 start.go:83] releasing machines lock for "pause-017566", held for 6.675805339s
	I0904 07:01:33.788102 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:33.788408 1161732 main.go:141] libmachine: (pause-017566) Calling .GetIP
	I0904 07:01:33.791521 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.791914 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:33.791977 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.792095 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:33.792611 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:33.792808 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:33.792932 1161732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 07:01:33.792992 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:33.793044 1161732 ssh_runner.go:195] Run: cat /version.json
	I0904 07:01:33.793087 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:33.795985 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.796378 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:33.796407 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.796428 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.796674 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:33.796854 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:33.796939 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:33.796976 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.797029 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:33.797123 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:33.797170 1161732 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/pause-017566/id_rsa Username:docker}
	I0904 07:01:33.797245 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:33.797390 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:33.797564 1161732 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/pause-017566/id_rsa Username:docker}
	I0904 07:01:33.916461 1161732 ssh_runner.go:195] Run: systemctl --version
	I0904 07:01:33.922526 1161732 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 07:01:34.076454 1161732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0904 07:01:34.087525 1161732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0904 07:01:34.087620 1161732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 07:01:34.098978 1161732 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
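The find command above is logged with its shell quoting stripped; a copy-pasteable, properly quoted form of the same step, which renames any bridge or podman CNI configs so CRI-O will not load them:

```bash
# Sideline bridge/podman CNI configs by appending .mk_disabled.
sudo find /etc/cni/net.d -maxdepth 1 -type f \
  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
```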
	I0904 07:01:34.099005 1161732 start.go:495] detecting cgroup driver to use...
	I0904 07:01:34.099086 1161732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 07:01:34.120306 1161732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 07:01:34.137553 1161732 docker.go:218] disabling cri-docker service (if available) ...
	I0904 07:01:34.137664 1161732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 07:01:34.154114 1161732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 07:01:34.169285 1161732 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 07:01:34.345407 1161732 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 07:01:34.520424 1161732 docker.go:234] disabling docker service ...
	I0904 07:01:34.520502 1161732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 07:01:34.550550 1161732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 07:01:34.565558 1161732 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 07:01:34.746021 1161732 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	W0904 07:01:30.483336 1161036 pod_ready.go:104] pod "coredns-66bc5c9577-lq225" is not "Ready", error: <nil>
	W0904 07:01:32.982861 1161036 pod_ready.go:104] pod "coredns-66bc5c9577-lq225" is not "Ready", error: <nil>
	I0904 07:01:33.667105 1161522 out.go:252]   - Generating certificates and keys ...
	I0904 07:01:33.667204 1161522 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0904 07:01:33.667291 1161522 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0904 07:01:34.195279 1161522 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0904 07:01:34.339684 1161522 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0904 07:01:34.516257 1161522 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0904 07:01:34.542907 1161522 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0904 07:01:34.820712 1161522 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0904 07:01:34.821029 1161522 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-644084 localhost] and IPs [192.168.83.184 127.0.0.1 ::1]
	I0904 07:01:34.936222 1161522 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0904 07:01:34.936602 1161522 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-644084 localhost] and IPs [192.168.83.184 127.0.0.1 ::1]
	I0904 07:01:35.189688 1161522 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0904 07:01:35.868146 1161522 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0904 07:01:35.933872 1161522 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0904 07:01:35.934204 1161522 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0904 07:01:36.288723 1161522 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0904 07:01:36.560107 1161522 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0904 07:01:36.660548 1161522 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0904 07:01:37.046981 1161522 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0904 07:01:37.442478 1161522 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0904 07:01:37.442906 1161522 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0904 07:01:37.446364 1161522 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0904 07:01:37.448049 1161522 out.go:252]   - Booting up control plane ...
	I0904 07:01:37.448193 1161522 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0904 07:01:37.448299 1161522 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0904 07:01:37.448394 1161522 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0904 07:01:37.473928 1161522 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0904 07:01:37.474061 1161522 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0904 07:01:37.481715 1161522 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0904 07:01:37.482086 1161522 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0904 07:01:37.482267 1161522 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0904 07:01:37.662999 1161522 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0904 07:01:37.663220 1161522 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0904 07:01:34.918646 1161732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 07:01:34.936473 1161732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 07:01:34.964184 1161732 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0904 07:01:34.964265 1161732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:34.976814 1161732 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 07:01:34.976888 1161732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:34.989396 1161732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:35.002104 1161732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:35.014978 1161732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 07:01:35.027454 1161732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:35.044316 1161732 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:35.058383 1161732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:35.070619 1161732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 07:01:35.081214 1161732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 07:01:35.096031 1161732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 07:01:35.271583 1161732 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0904 07:01:39.545989 1161732 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.274359891s)
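The 4.3s restart above closes out a run of sed edits to the CRI-O drop-in config; the essential ones gathered in one place (expressions copied from the log lines preceding the restart):

```bash
# Point CRI-O at the pause image and cgroupfs driver, enable IPv4
# forwarding for pod routing, then reload units and restart the runtime.
CONF=/etc/crio/crio.conf.d/02-crio.conf
sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
sudo systemctl daemon-reload && sudo systemctl restart crio
```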
	I0904 07:01:39.546026 1161732 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 07:01:39.546098 1161732 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 07:01:39.551592 1161732 start.go:563] Will wait 60s for crictl version
	I0904 07:01:39.551658 1161732 ssh_runner.go:195] Run: which crictl
	I0904 07:01:39.555911 1161732 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 07:01:39.593817 1161732 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0904 07:01:39.593911 1161732 ssh_runner.go:195] Run: crio --version
	I0904 07:01:39.623039 1161732 ssh_runner.go:195] Run: crio --version
	I0904 07:01:39.661659 1161732 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0904 07:01:39.662705 1161732 main.go:141] libmachine: (pause-017566) Calling .GetIP
	I0904 07:01:39.666104 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:39.666530 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:39.666563 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:39.666943 1161732 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0904 07:01:39.672719 1161732 kubeadm.go:875] updating cluster {Name:pause-017566 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-017566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 07:01:39.672897 1161732 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 07:01:39.672947 1161732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 07:01:39.714651 1161732 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 07:01:39.714676 1161732 crio.go:433] Images already preloaded, skipping extraction
	I0904 07:01:39.714749 1161732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 07:01:39.751978 1161732 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 07:01:39.752004 1161732 cache_images.go:85] Images are preloaded, skipping loading
	I0904 07:01:39.752012 1161732 kubeadm.go:926] updating node { 192.168.39.168 8443 v1.34.0 crio true true} ...
	I0904 07:01:39.752114 1161732 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-017566 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:pause-017566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
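	The two ExecStart= lines in the drop-in above are intentional: in a systemd override, an empty ExecStart= first clears the base unit's command, and the following ExecStart= replaces it. A minimal Go sketch of rendering such a drop-in with text/template (the template, field names, and values below are illustrative, not minikube's actual generator):

    package main

    import (
    	"os"
    	"text/template"
    )

    // dropIn is an illustrative kubelet systemd drop-in. The empty
    // ExecStart= clears the base unit's command; the second one
    // then sets the override.
    const dropIn = `[Unit]
    Wants={{.Runtime}}.service

    [Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --hostname-override={{.Hostname}} --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(dropIn))
    	// Hypothetical values mirroring the unit dumped in the log above.
    	err := t.Execute(os.Stdout, map[string]string{
    		"Runtime":     "crio",
    		"KubeletPath": "/var/lib/minikube/binaries/v1.34.0/kubelet",
    		"Hostname":    "pause-017566",
    		"NodeIP":      "192.168.39.168",
    	})
    	if err != nil {
    		panic(err)
    	}
    }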
	I0904 07:01:39.752179 1161732 ssh_runner.go:195] Run: crio config
	I0904 07:01:39.795416 1161732 cni.go:84] Creating CNI manager for ""
	I0904 07:01:39.795443 1161732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 07:01:39.795458 1161732 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 07:01:39.795500 1161732 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.168 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-017566 NodeName:pause-017566 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 07:01:39.795668 1161732 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-017566"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.168"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.168"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 07:01:39.795740 1161732 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 07:01:39.807142 1161732 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 07:01:39.807227 1161732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 07:01:39.818028 1161732 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0904 07:01:39.841592 1161732 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 07:01:39.863014 1161732 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
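	The kubeadm.yaml.new shipped above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, as dumped earlier in the log). A minimal sketch of walking such a stream document by document, assuming gopkg.in/yaml.v3 and a local copy of the file (this is not the parser minikube itself uses):

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml") // hypothetical local copy
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	// yaml.v3's Decoder yields one document per Decode call and
    	// returns io.EOF when the stream is exhausted.
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		fmt.Println(doc["kind"]) // e.g. ClusterConfiguration
    	}
    }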
	I0904 07:01:39.882663 1161732 ssh_runner.go:195] Run: grep 192.168.39.168	control-plane.minikube.internal$ /etc/hosts
	I0904 07:01:39.886632 1161732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W0904 07:01:35.217339 1161036 pod_ready.go:104] pod "coredns-66bc5c9577-lq225" is not "Ready", error: <nil>
	W0904 07:01:37.482464 1161036 pod_ready.go:104] pod "coredns-66bc5c9577-lq225" is not "Ready", error: <nil>
	W0904 07:01:39.484678 1161036 pod_ready.go:104] pod "coredns-66bc5c9577-lq225" is not "Ready", error: <nil>
	I0904 07:01:39.662705 1161522 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.001321555s
	I0904 07:01:39.666939 1161522 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0904 07:01:39.667063 1161522 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.83.184:8443/livez
	I0904 07:01:39.667180 1161522 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0904 07:01:39.667284 1161522 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0904 07:01:42.224101 1161522 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.557434475s
	I0904 07:01:43.264683 1161522 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.597296057s
	I0904 07:01:40.059102 1161732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 07:01:40.075459 1161732 certs.go:68] Setting up /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566 for IP: 192.168.39.168
	I0904 07:01:40.075502 1161732 certs.go:194] generating shared ca certs ...
	I0904 07:01:40.075538 1161732 certs.go:226] acquiring lock for ca certs: {Name:mkb48abb711128619cd278e65e40c326a6b20d89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:01:40.075768 1161732 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.key
	I0904 07:01:40.075842 1161732 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/proxy-client-ca.key
	I0904 07:01:40.075862 1161732 certs.go:256] generating profile certs ...
	I0904 07:01:40.075981 1161732 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566/client.key
	I0904 07:01:40.076067 1161732 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566/apiserver.key.46bf764b
	I0904 07:01:40.076144 1161732 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566/proxy-client.key
	I0904 07:01:40.076287 1161732 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/1120074.pem (1338 bytes)
	W0904 07:01:40.076327 1161732 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/1120074_empty.pem, impossibly tiny 0 bytes
	I0904 07:01:40.076340 1161732 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 07:01:40.076373 1161732 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem (1082 bytes)
	I0904 07:01:40.076404 1161732 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/cert.pem (1123 bytes)
	I0904 07:01:40.076436 1161732 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/key.pem (1679 bytes)
	I0904 07:01:40.076497 1161732 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/files/etc/ssl/certs/11200742.pem (1708 bytes)
	I0904 07:01:40.077172 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 07:01:40.108154 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0904 07:01:40.136983 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 07:01:40.167004 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0904 07:01:40.199411 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0904 07:01:40.229354 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0904 07:01:40.263364 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 07:01:40.294718 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0904 07:01:40.329466 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/files/etc/ssl/certs/11200742.pem --> /usr/share/ca-certificates/11200742.pem (1708 bytes)
	I0904 07:01:40.363576 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 07:01:40.396318 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/1120074.pem --> /usr/share/ca-certificates/1120074.pem (1338 bytes)
	I0904 07:01:40.430931 1161732 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 07:01:40.452998 1161732 ssh_runner.go:195] Run: openssl version
	I0904 07:01:40.461063 1161732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11200742.pem && ln -fs /usr/share/ca-certificates/11200742.pem /etc/ssl/certs/11200742.pem"
	I0904 07:01:40.477331 1161732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11200742.pem
	I0904 07:01:40.492886 1161732 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  4 06:04 /usr/share/ca-certificates/11200742.pem
	I0904 07:01:40.493057 1161732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11200742.pem
	I0904 07:01:40.508368 1161732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11200742.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 07:01:40.573215 1161732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 07:01:40.592349 1161732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 07:01:40.603505 1161732 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 05:54 /usr/share/ca-certificates/minikubeCA.pem
	I0904 07:01:40.603580 1161732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 07:01:40.621795 1161732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 07:01:40.656205 1161732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1120074.pem && ln -fs /usr/share/ca-certificates/1120074.pem /etc/ssl/certs/1120074.pem"
	I0904 07:01:40.689203 1161732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1120074.pem
	I0904 07:01:40.700628 1161732 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  4 06:04 /usr/share/ca-certificates/1120074.pem
	I0904 07:01:40.700733 1161732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1120074.pem
	I0904 07:01:40.718305 1161732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1120074.pem /etc/ssl/certs/51391683.0"
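	The loop above hashes each CA bundle with openssl x509 -hash -noout and links it as <hash>.0 under /etc/ssl/certs; OpenSSL locates trusted certificates by that subject-hash file name. A rough Go equivalent that shells out the same way (paths are illustrative, error handling minimal):

    package main

    import (
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash symlinks certPath into certsDir under the
    // OpenSSL subject-hash name (<hash>.0), mirroring the ln -fs
    // calls in the log above.
    func linkBySubjectHash(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // -f semantics: replace an existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		panic(err)
    	}
    }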
	I0904 07:01:40.748388 1161732 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 07:01:40.764024 1161732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0904 07:01:40.790149 1161732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0904 07:01:40.806535 1161732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0904 07:01:40.822778 1161732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0904 07:01:40.836036 1161732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0904 07:01:40.848094 1161732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
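	Each -checkend 86400 run above asks openssl whether the certificate expires within the next 24 hours. The same check in Go with crypto/x509, under the assumption of a single PEM certificate per file:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first PEM certificate in path
    // expires within d, matching openssl x509 -checkend semantics.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }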
	I0904 07:01:40.861650 1161732 kubeadm.go:392] StartCluster: {Name:pause-017566 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-017566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 07:01:40.861903 1161732 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 07:01:40.862007 1161732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 07:01:40.952656 1161732 cri.go:89] found id: "7bd228eee0c8478996d5e834f0e01320ec10565c851fb545d08f599c036f664e"
	I0904 07:01:40.952687 1161732 cri.go:89] found id: "bb4a7e0352be4102c6ffc78172d580c052dba2d2803d939ac1ad23e45e8677ca"
	I0904 07:01:40.952692 1161732 cri.go:89] found id: "0b029332740d46dc6f0939ada2079b4939254cb16a68486524aa04a27a2b6bcf"
	I0904 07:01:40.952697 1161732 cri.go:89] found id: "b880e684a6e0d5818a2df4915f902ea1940a2b8fab778c808806680aa4d82037"
	I0904 07:01:40.952702 1161732 cri.go:89] found id: "143324528cf349785e87b806fa537a8990761956d653c2efad7cbd0eba68feb9"
	I0904 07:01:40.952707 1161732 cri.go:89] found id: "6f3f77c12db6e0e60d13e8d3c64818d2d235cc405b125f184aa5dc00f939cd6a"
	I0904 07:01:40.952711 1161732 cri.go:89] found id: ""
	I0904 07:01:40.952765 1161732 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
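The container IDs found near the end of the truncated log above come from crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system, which prints one ID per line. A rough Go wrapper for that listing step (it assumes crictl on the PATH and sudo access, and is not minikube's own cri.go implementation):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listKubeSystemContainers returns the container IDs that crictl
    // reports for the kube-system namespace, one ID per output line.
    func listKubeSystemContainers() ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	ids, err := listKubeSystemContainers()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(ids)
    }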
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-017566 -n pause-017566
helpers_test.go:269: (dbg) Run:  kubectl --context pause-017566 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-017566 -n pause-017566
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-017566 logs -n 25
E0904 07:02:04.417763 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-017566 logs -n 25: (1.398412213s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                          ARGS                                                                                                           │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-324880 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                                     │ NoKubernetes-324880       │ jenkins │ v1.36.0 │ 04 Sep 25 06:57 UTC │ 04 Sep 25 06:58 UTC │
	│ start   │ -p kubernetes-upgrade-177439 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                                                                                         │ kubernetes-upgrade-177439 │ jenkins │ v1.36.0 │ 04 Sep 25 06:57 UTC │                     │
	│ start   │ -p kubernetes-upgrade-177439 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                  │ kubernetes-upgrade-177439 │ jenkins │ v1.36.0 │ 04 Sep 25 06:57 UTC │ 04 Sep 25 07:00 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-798275 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ stopped-upgrade-798275    │ jenkins │ v1.36.0 │ 04 Sep 25 06:58 UTC │                     │
	│ delete  │ -p stopped-upgrade-798275                                                                                                                                                                                               │ stopped-upgrade-798275    │ jenkins │ v1.36.0 │ 04 Sep 25 06:58 UTC │ 04 Sep 25 06:58 UTC │
	│ start   │ -p cert-expiration-986529 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                    │ cert-expiration-986529    │ jenkins │ v1.36.0 │ 04 Sep 25 06:58 UTC │ 04 Sep 25 06:59 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-050549 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ running-upgrade-050549    │ jenkins │ v1.36.0 │ 04 Sep 25 06:58 UTC │                     │
	│ delete  │ -p running-upgrade-050549                                                                                                                                                                                               │ running-upgrade-050549    │ jenkins │ v1.36.0 │ 04 Sep 25 06:58 UTC │ 04 Sep 25 06:58 UTC │
	│ start   │ -p force-systemd-flag-969000 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                               │ force-systemd-flag-969000 │ jenkins │ v1.36.0 │ 04 Sep 25 06:58 UTC │ 04 Sep 25 06:59 UTC │
	│ ssh     │ -p NoKubernetes-324880 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-324880       │ jenkins │ v1.36.0 │ 04 Sep 25 06:58 UTC │                     │
	│ stop    │ -p NoKubernetes-324880                                                                                                                                                                                                  │ NoKubernetes-324880       │ jenkins │ v1.36.0 │ 04 Sep 25 06:58 UTC │ 04 Sep 25 06:58 UTC │
	│ start   │ -p NoKubernetes-324880 --driver=kvm2  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-324880       │ jenkins │ v1.36.0 │ 04 Sep 25 06:58 UTC │                     │
	│ delete  │ -p NoKubernetes-324880                                                                                                                                                                                                  │ NoKubernetes-324880       │ jenkins │ v1.36.0 │ 04 Sep 25 06:59 UTC │ 04 Sep 25 06:59 UTC │
	│ start   │ -p pause-017566 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                 │ pause-017566              │ jenkins │ v1.36.0 │ 04 Sep 25 06:59 UTC │ 04 Sep 25 07:01 UTC │
	│ ssh     │ force-systemd-flag-969000 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                    │ force-systemd-flag-969000 │ jenkins │ v1.36.0 │ 04 Sep 25 06:59 UTC │ 04 Sep 25 06:59 UTC │
	│ delete  │ -p force-systemd-flag-969000                                                                                                                                                                                            │ force-systemd-flag-969000 │ jenkins │ v1.36.0 │ 04 Sep 25 06:59 UTC │ 04 Sep 25 06:59 UTC │
	│ start   │ -p cert-options-153188 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio │ cert-options-153188       │ jenkins │ v1.36.0 │ 04 Sep 25 06:59 UTC │ 04 Sep 25 07:00 UTC │
	│ delete  │ -p kubernetes-upgrade-177439                                                                                                                                                                                            │ kubernetes-upgrade-177439 │ jenkins │ v1.36.0 │ 04 Sep 25 07:00 UTC │ 04 Sep 25 07:00 UTC │
	│ start   │ -p auto-644084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                                                                                   │ auto-644084               │ jenkins │ v1.36.0 │ 04 Sep 25 07:00 UTC │ 04 Sep 25 07:01 UTC │
	│ ssh     │ cert-options-153188 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                             │ cert-options-153188       │ jenkins │ v1.36.0 │ 04 Sep 25 07:00 UTC │ 04 Sep 25 07:00 UTC │
	│ ssh     │ -p cert-options-153188 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                           │ cert-options-153188       │ jenkins │ v1.36.0 │ 04 Sep 25 07:00 UTC │ 04 Sep 25 07:00 UTC │
	│ delete  │ -p cert-options-153188                                                                                                                                                                                                  │ cert-options-153188       │ jenkins │ v1.36.0 │ 04 Sep 25 07:00 UTC │ 04 Sep 25 07:00 UTC │
	│ start   │ -p kindnet-644084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                                                                                  │ kindnet-644084            │ jenkins │ v1.36.0 │ 04 Sep 25 07:00 UTC │                     │
	│ start   │ -p pause-017566 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                          │ pause-017566              │ jenkins │ v1.36.0 │ 04 Sep 25 07:01 UTC │ 04 Sep 25 07:02 UTC │
	│ ssh     │ -p auto-644084 pgrep -a kubelet                                                                                                                                                                                         │ auto-644084               │ jenkins │ v1.36.0 │ 04 Sep 25 07:01 UTC │ 04 Sep 25 07:01 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 07:01:09
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 07:01:09.902408 1161732 out.go:360] Setting OutFile to fd 1 ...
	I0904 07:01:09.903196 1161732 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 07:01:09.903260 1161732 out.go:374] Setting ErrFile to fd 2...
	I0904 07:01:09.903277 1161732 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 07:01:09.903760 1161732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1115845/.minikube/bin
	I0904 07:01:09.904952 1161732 out.go:368] Setting JSON to false
	I0904 07:01:09.906065 1161732 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":17013,"bootTime":1756952257,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 07:01:09.906178 1161732 start.go:140] virtualization: kvm guest
	I0904 07:01:09.907805 1161732 out.go:179] * [pause-017566] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 07:01:09.908976 1161732 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 07:01:09.909010 1161732 notify.go:220] Checking for updates...
	I0904 07:01:09.910997 1161732 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 07:01:09.912148 1161732 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1115845/kubeconfig
	I0904 07:01:09.913170 1161732 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1115845/.minikube
	I0904 07:01:09.914073 1161732 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 07:01:09.915035 1161732 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 07:01:09.916362 1161732 config.go:182] Loaded profile config "pause-017566": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 07:01:09.916829 1161732 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 07:01:09.916879 1161732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 07:01:09.934800 1161732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34317
	I0904 07:01:09.935395 1161732 main.go:141] libmachine: () Calling .GetVersion
	I0904 07:01:09.935984 1161732 main.go:141] libmachine: Using API Version  1
	I0904 07:01:09.936016 1161732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 07:01:09.936409 1161732 main.go:141] libmachine: () Calling .GetMachineName
	I0904 07:01:09.936644 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:09.936927 1161732 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 07:01:09.937246 1161732 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 07:01:09.937293 1161732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 07:01:09.952626 1161732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37141
	I0904 07:01:09.953214 1161732 main.go:141] libmachine: () Calling .GetVersion
	I0904 07:01:09.953784 1161732 main.go:141] libmachine: Using API Version  1
	I0904 07:01:09.953816 1161732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 07:01:09.954335 1161732 main.go:141] libmachine: () Calling .GetMachineName
	I0904 07:01:09.954553 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:09.993954 1161732 out.go:179] * Using the kvm2 driver based on existing profile
	I0904 07:01:09.995109 1161732 start.go:304] selected driver: kvm2
	I0904 07:01:09.995132 1161732 start.go:918] validating driver "kvm2" against &{Name:pause-017566 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-017566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 07:01:09.995321 1161732 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 07:01:09.995816 1161732 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 07:01:09.995920 1161732 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1115845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0904 07:01:10.019030 1161732 install.go:137] /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0904 07:01:10.020243 1161732 cni.go:84] Creating CNI manager for ""
	I0904 07:01:10.020329 1161732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 07:01:10.020416 1161732 start.go:348] cluster config:
	{Name:pause-017566 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-017566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 07:01:10.020664 1161732 iso.go:125] acquiring lock: {Name:mk8046b526ef8e07e7f8bc343ab464442f664799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 07:01:10.022309 1161732 out.go:179] * Starting "pause-017566" primary control-plane node in "pause-017566" cluster
	I0904 07:01:08.460360 1161036 out.go:252]   - Generating certificates and keys ...
	I0904 07:01:08.460553 1161036 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0904 07:01:08.460651 1161036 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0904 07:01:08.538889 1161036 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0904 07:01:08.809600 1161036 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0904 07:01:09.114655 1161036 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0904 07:01:09.744611 1161036 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0904 07:01:10.137279 1161036 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0904 07:01:10.137551 1161036 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-644084 localhost] and IPs [192.168.61.91 127.0.0.1 ::1]
	I0904 07:01:10.197031 1161036 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0904 07:01:10.197229 1161036 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-644084 localhost] and IPs [192.168.61.91 127.0.0.1 ::1]
	I0904 07:01:10.306155 1161036 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0904 07:01:10.365532 1161036 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0904 07:01:10.570379 1161036 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0904 07:01:10.570496 1161036 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0904 07:01:10.621046 1161036 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0904 07:01:11.024853 1161036 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0904 07:01:11.448309 1161036 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0904 07:01:11.496168 1161036 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0904 07:01:11.620120 1161036 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0904 07:01:11.620869 1161036 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0904 07:01:11.623044 1161036 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0904 07:01:09.416227 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:09.416748 1161522 main.go:141] libmachine: (kindnet-644084) DBG | unable to find current IP address of domain kindnet-644084 in network mk-kindnet-644084
	I0904 07:01:09.416801 1161522 main.go:141] libmachine: (kindnet-644084) DBG | I0904 07:01:09.416733 1161573 retry.go:31] will retry after 2.148885028s: waiting for domain to come up
	I0904 07:01:11.567679 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:11.568392 1161522 main.go:141] libmachine: (kindnet-644084) DBG | unable to find current IP address of domain kindnet-644084 in network mk-kindnet-644084
	I0904 07:01:11.568439 1161522 main.go:141] libmachine: (kindnet-644084) DBG | I0904 07:01:11.568313 1161573 retry.go:31] will retry after 1.910963226s: waiting for domain to come up
	I0904 07:01:10.023281 1161732 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 07:01:10.023331 1161732 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21409-1115845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 07:01:10.023355 1161732 cache.go:58] Caching tarball of preloaded images
	I0904 07:01:10.023454 1161732 preload.go:172] Found /home/jenkins/minikube-integration/21409-1115845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0904 07:01:10.023469 1161732 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0904 07:01:10.023627 1161732 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566/config.json ...
	I0904 07:01:10.023895 1161732 start.go:360] acquireMachinesLock for pause-017566: {Name:mk3d0e482c06d0ca53afa1318fbdd30ffc2f15b4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 07:01:11.624627 1161036 out.go:252]   - Booting up control plane ...
	I0904 07:01:11.624773 1161036 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0904 07:01:11.624899 1161036 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0904 07:01:11.625030 1161036 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0904 07:01:11.653918 1161036 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0904 07:01:11.654124 1161036 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0904 07:01:11.666350 1161036 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0904 07:01:11.668822 1161036 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0904 07:01:11.668917 1161036 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0904 07:01:11.866416 1161036 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0904 07:01:11.866608 1161036 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0904 07:01:12.866545 1161036 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001211929s
	I0904 07:01:12.869311 1161036 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0904 07:01:12.869486 1161036 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.61.91:8443/livez
	I0904 07:01:12.869626 1161036 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0904 07:01:12.869755 1161036 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0904 07:01:13.481370 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:13.482131 1161522 main.go:141] libmachine: (kindnet-644084) DBG | unable to find current IP address of domain kindnet-644084 in network mk-kindnet-644084
	I0904 07:01:13.482165 1161522 main.go:141] libmachine: (kindnet-644084) DBG | I0904 07:01:13.482080 1161573 retry.go:31] will retry after 2.962922625s: waiting for domain to come up
	I0904 07:01:16.446718 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:16.447198 1161522 main.go:141] libmachine: (kindnet-644084) DBG | unable to find current IP address of domain kindnet-644084 in network mk-kindnet-644084
	I0904 07:01:16.447274 1161522 main.go:141] libmachine: (kindnet-644084) DBG | I0904 07:01:16.447188 1161573 retry.go:31] will retry after 4.019296735s: waiting for domain to come up
	I0904 07:01:15.249136 1161036 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.381239637s
	I0904 07:01:16.611320 1161036 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.744293495s
	I0904 07:01:18.367678 1161036 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.501606561s
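	The control-plane-check phase above polls each component's health endpoint until it answers 200. A minimal sketch of that kind of poll in Go (the components serve self-signed TLS locally, hence the skip-verify client; the URL, timeout, and retry interval are illustrative):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthy polls url until it returns HTTP 200 or the deadline
    // passes, roughly what kubeadm's control-plane-check does.
    func waitHealthy(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			// Control-plane components serve self-signed certs locally.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitHealthy("https://127.0.0.1:10257/healthz", 4*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("kube-controller-manager is healthy")
    }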
	I0904 07:01:18.381492 1161036 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0904 07:01:18.392820 1161036 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0904 07:01:18.403474 1161036 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0904 07:01:18.403754 1161036 kubeadm.go:310] [mark-control-plane] Marking the node auto-644084 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0904 07:01:18.413012 1161036 kubeadm.go:310] [bootstrap-token] Using token: r8f1gr.b3hnw7k15x3h1e9w
	I0904 07:01:18.414245 1161036 out.go:252]   - Configuring RBAC rules ...
	I0904 07:01:18.414400 1161036 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0904 07:01:18.421710 1161036 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0904 07:01:18.428567 1161036 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0904 07:01:18.434116 1161036 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0904 07:01:18.441965 1161036 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0904 07:01:18.448501 1161036 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0904 07:01:18.776213 1161036 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0904 07:01:19.213174 1161036 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0904 07:01:19.773634 1161036 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0904 07:01:19.775511 1161036 kubeadm.go:310] 
	I0904 07:01:19.775626 1161036 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0904 07:01:19.775639 1161036 kubeadm.go:310] 
	I0904 07:01:19.775761 1161036 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0904 07:01:19.775778 1161036 kubeadm.go:310] 
	I0904 07:01:19.775816 1161036 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0904 07:01:19.775904 1161036 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0904 07:01:19.775994 1161036 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0904 07:01:19.776020 1161036 kubeadm.go:310] 
	I0904 07:01:19.776118 1161036 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0904 07:01:19.776134 1161036 kubeadm.go:310] 
	I0904 07:01:19.776202 1161036 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0904 07:01:19.776213 1161036 kubeadm.go:310] 
	I0904 07:01:19.776302 1161036 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0904 07:01:19.776424 1161036 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0904 07:01:19.776532 1161036 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0904 07:01:19.776568 1161036 kubeadm.go:310] 
	I0904 07:01:19.776697 1161036 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0904 07:01:19.776792 1161036 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0904 07:01:19.776802 1161036 kubeadm.go:310] 
	I0904 07:01:19.776923 1161036 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token r8f1gr.b3hnw7k15x3h1e9w \
	I0904 07:01:19.777075 1161036 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2651308ab51fc83fc020f40c2b31f227a6667a51808f73ed273560ac054e9c36 \
	I0904 07:01:19.777130 1161036 kubeadm.go:310] 	--control-plane 
	I0904 07:01:19.777148 1161036 kubeadm.go:310] 
	I0904 07:01:19.777260 1161036 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0904 07:01:19.777270 1161036 kubeadm.go:310] 
	I0904 07:01:19.777378 1161036 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token r8f1gr.b3hnw7k15x3h1e9w \
	I0904 07:01:19.777556 1161036 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2651308ab51fc83fc020f40c2b31f227a6667a51808f73ed273560ac054e9c36 
	I0904 07:01:19.777724 1161036 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
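	The --discovery-token-ca-cert-hash printed in the join command above is, in kubeadm's format, sha256:<hex of the CA certificate's DER-encoded SubjectPublicKeyInfo>. A small sketch that recomputes it from a CA PEM (the path is illustrative):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }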
	I0904 07:01:19.777739 1161036 cni.go:84] Creating CNI manager for ""
	I0904 07:01:19.777750 1161036 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 07:01:19.779624 1161036 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0904 07:01:19.780667 1161036 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0904 07:01:19.795052 1161036 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0904 07:01:19.815906 1161036 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 07:01:19.816037 1161036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:01:19.816077 1161036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-644084 minikube.k8s.io/updated_at=2025_09_04T07_01_19_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=c3fa37de45a2901b215fab008201edf72ce5a1ff minikube.k8s.io/name=auto-644084 minikube.k8s.io/primary=true
	I0904 07:01:19.861305 1161036 ops.go:34] apiserver oom_adj: -16
	I0904 07:01:19.977682 1161036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:01:20.471560 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:20.471979 1161522 main.go:141] libmachine: (kindnet-644084) DBG | unable to find current IP address of domain kindnet-644084 in network mk-kindnet-644084
	I0904 07:01:20.472003 1161522 main.go:141] libmachine: (kindnet-644084) DBG | I0904 07:01:20.471957 1161573 retry.go:31] will retry after 4.751158317s: waiting for domain to come up
	I0904 07:01:20.478299 1161036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:01:20.978055 1161036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:01:21.477975 1161036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:01:21.978494 1161036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:01:22.478619 1161036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:01:22.978475 1161036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:01:23.477941 1161036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:01:23.978064 1161036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:01:24.478754 1161036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 07:01:24.608139 1161036 kubeadm.go:1105] duration metric: took 4.79217007s to wait for elevateKubeSystemPrivileges
	I0904 07:01:24.608193 1161036 kubeadm.go:394] duration metric: took 16.627710729s to StartCluster
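
The burst of `get sa default` runs above, spaced roughly 500ms apart, is a plain poll loop: kubeadm creates the `default` ServiceAccount asynchronously, and the clusterrolebinding step cannot complete until it exists (4.79s in this run). A sketch of the same retry pattern, not minikube's actual code; the one-minute timeout is an assumption:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until it succeeds,
    // matching the ~500ms cadence visible in the log.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
                "get", "sa", "default").Run()
            if err == nil {
                return nil // the default ServiceAccount now exists
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("default ServiceAccount is ready")
    }
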
	I0904 07:01:24.608222 1161036 settings.go:142] acquiring lock: {Name:mkb015a02541f006ebfff677085f6c9619eaacb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:01:24.608314 1161036 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-1115845/kubeconfig
	I0904 07:01:24.609492 1161036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/kubeconfig: {Name:mk586aba4eac8031d07aaf208d256e06f68e9260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:01:24.609734 1161036 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0904 07:01:24.609747 1161036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0904 07:01:24.609768 1161036 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0904 07:01:24.609879 1161036 addons.go:69] Setting storage-provisioner=true in profile "auto-644084"
	I0904 07:01:24.609885 1161036 addons.go:69] Setting default-storageclass=true in profile "auto-644084"
	I0904 07:01:24.609906 1161036 addons.go:238] Setting addon storage-provisioner=true in "auto-644084"
	I0904 07:01:24.609946 1161036 host.go:66] Checking if "auto-644084" exists ...
	I0904 07:01:24.609907 1161036 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-644084"
	I0904 07:01:24.610029 1161036 config.go:182] Loaded profile config "auto-644084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 07:01:24.610489 1161036 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 07:01:24.610536 1161036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 07:01:24.610489 1161036 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 07:01:24.610680 1161036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 07:01:24.611332 1161036 out.go:179] * Verifying Kubernetes components...
	I0904 07:01:24.612700 1161036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 07:01:24.626703 1161036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46651
	I0904 07:01:24.626720 1161036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43867
	I0904 07:01:24.627295 1161036 main.go:141] libmachine: () Calling .GetVersion
	I0904 07:01:24.627365 1161036 main.go:141] libmachine: () Calling .GetVersion
	I0904 07:01:24.627859 1161036 main.go:141] libmachine: Using API Version  1
	I0904 07:01:24.627883 1161036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 07:01:24.628011 1161036 main.go:141] libmachine: Using API Version  1
	I0904 07:01:24.628031 1161036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 07:01:24.628267 1161036 main.go:141] libmachine: () Calling .GetMachineName
	I0904 07:01:24.628396 1161036 main.go:141] libmachine: () Calling .GetMachineName
	I0904 07:01:24.628433 1161036 main.go:141] libmachine: (auto-644084) Calling .GetState
	I0904 07:01:24.629001 1161036 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 07:01:24.629043 1161036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 07:01:24.632023 1161036 addons.go:238] Setting addon default-storageclass=true in "auto-644084"
	I0904 07:01:24.632060 1161036 host.go:66] Checking if "auto-644084" exists ...
	I0904 07:01:24.632316 1161036 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 07:01:24.632357 1161036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 07:01:24.646228 1161036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37437
	I0904 07:01:24.646795 1161036 main.go:141] libmachine: () Calling .GetVersion
	I0904 07:01:24.647447 1161036 main.go:141] libmachine: Using API Version  1
	I0904 07:01:24.647477 1161036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 07:01:24.647954 1161036 main.go:141] libmachine: () Calling .GetMachineName
	I0904 07:01:24.648206 1161036 main.go:141] libmachine: (auto-644084) Calling .GetState
	I0904 07:01:24.649230 1161036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38343
	I0904 07:01:24.649825 1161036 main.go:141] libmachine: () Calling .GetVersion
	I0904 07:01:24.650212 1161036 main.go:141] libmachine: (auto-644084) Calling .DriverName
	I0904 07:01:24.650412 1161036 main.go:141] libmachine: Using API Version  1
	I0904 07:01:24.650441 1161036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 07:01:24.650906 1161036 main.go:141] libmachine: () Calling .GetMachineName
	I0904 07:01:24.651382 1161036 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 07:01:24.651417 1161036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 07:01:24.652124 1161036 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 07:01:24.653279 1161036 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 07:01:24.653303 1161036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 07:01:24.653327 1161036 main.go:141] libmachine: (auto-644084) Calling .GetSSHHostname
	I0904 07:01:24.656786 1161036 main.go:141] libmachine: (auto-644084) DBG | domain auto-644084 has defined MAC address 52:54:00:d7:b9:91 in network mk-auto-644084
	I0904 07:01:24.657202 1161036 main.go:141] libmachine: (auto-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:b9:91", ip: ""} in network mk-auto-644084: {Iface:virbr3 ExpiryTime:2025-09-04 08:00:52 +0000 UTC Type:0 Mac:52:54:00:d7:b9:91 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:auto-644084 Clientid:01:52:54:00:d7:b9:91}
	I0904 07:01:24.657231 1161036 main.go:141] libmachine: (auto-644084) DBG | domain auto-644084 has defined IP address 192.168.61.91 and MAC address 52:54:00:d7:b9:91 in network mk-auto-644084
	I0904 07:01:24.657404 1161036 main.go:141] libmachine: (auto-644084) Calling .GetSSHPort
	I0904 07:01:24.657569 1161036 main.go:141] libmachine: (auto-644084) Calling .GetSSHKeyPath
	I0904 07:01:24.657756 1161036 main.go:141] libmachine: (auto-644084) Calling .GetSSHUsername
	I0904 07:01:24.657874 1161036 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/auto-644084/id_rsa Username:docker}
	I0904 07:01:24.672209 1161036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33835
	I0904 07:01:24.672906 1161036 main.go:141] libmachine: () Calling .GetVersion
	I0904 07:01:24.673495 1161036 main.go:141] libmachine: Using API Version  1
	I0904 07:01:24.673518 1161036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 07:01:24.673854 1161036 main.go:141] libmachine: () Calling .GetMachineName
	I0904 07:01:24.674025 1161036 main.go:141] libmachine: (auto-644084) Calling .GetState
	I0904 07:01:24.675938 1161036 main.go:141] libmachine: (auto-644084) Calling .DriverName
	I0904 07:01:24.676139 1161036 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 07:01:24.676155 1161036 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 07:01:24.676173 1161036 main.go:141] libmachine: (auto-644084) Calling .GetSSHHostname
	I0904 07:01:24.679460 1161036 main.go:141] libmachine: (auto-644084) DBG | domain auto-644084 has defined MAC address 52:54:00:d7:b9:91 in network mk-auto-644084
	I0904 07:01:24.679933 1161036 main.go:141] libmachine: (auto-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:b9:91", ip: ""} in network mk-auto-644084: {Iface:virbr3 ExpiryTime:2025-09-04 08:00:52 +0000 UTC Type:0 Mac:52:54:00:d7:b9:91 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:auto-644084 Clientid:01:52:54:00:d7:b9:91}
	I0904 07:01:24.679951 1161036 main.go:141] libmachine: (auto-644084) DBG | domain auto-644084 has defined IP address 192.168.61.91 and MAC address 52:54:00:d7:b9:91 in network mk-auto-644084
	I0904 07:01:24.680204 1161036 main.go:141] libmachine: (auto-644084) Calling .GetSSHPort
	I0904 07:01:24.680413 1161036 main.go:141] libmachine: (auto-644084) Calling .GetSSHKeyPath
	I0904 07:01:24.680589 1161036 main.go:141] libmachine: (auto-644084) Calling .GetSSHUsername
	I0904 07:01:24.680728 1161036 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/auto-644084/id_rsa Username:docker}
	I0904 07:01:24.852927 1161036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
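
The pipeline above edits the Corefile held in the coredns ConfigMap in place: sed splices a `hosts` block mapping host.minikube.internal to the host-side gateway IP in front of the `forward` plugin, so that name resolves inside the cluster without ever leaving it, and adds a `log` directive before `errors`. A sketch of the hosts-block insertion as plain string surgery (not minikube's actual code):

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostRecord splices a `hosts` block in front of CoreDNS's
    // `forward` plugin, the same edit the sed pipeline performs on the
    // Corefile stored in the coredns ConfigMap.
    func injectHostRecord(corefile, hostIP string) string {
        hostsBlock := fmt.Sprintf(
            "        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
            hostIP)
        var out strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                out.WriteString(hostsBlock) // insert just before the forward plugin
            }
            out.WriteString(line)
        }
        return out.String()
    }

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
        fmt.Print(injectHostRecord(corefile, "192.168.61.1"))
    }
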
	I0904 07:01:24.910035 1161036 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 07:01:25.093937 1161036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 07:01:25.114539 1161036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 07:01:25.664882 1161036 start.go:976] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0904 07:01:25.665023 1161036 main.go:141] libmachine: Making call to close driver server
	I0904 07:01:25.665110 1161036 main.go:141] libmachine: (auto-644084) Calling .Close
	I0904 07:01:25.665478 1161036 main.go:141] libmachine: Successfully made call to close driver server
	I0904 07:01:25.665497 1161036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 07:01:25.665507 1161036 main.go:141] libmachine: Making call to close driver server
	I0904 07:01:25.665516 1161036 main.go:141] libmachine: (auto-644084) Calling .Close
	I0904 07:01:25.665792 1161036 main.go:141] libmachine: Successfully made call to close driver server
	I0904 07:01:25.665830 1161036 main.go:141] libmachine: (auto-644084) DBG | Closing plugin on server side
	I0904 07:01:25.665846 1161036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 07:01:25.666207 1161036 node_ready.go:35] waiting up to 15m0s for node "auto-644084" to be "Ready" ...
	I0904 07:01:25.699220 1161036 node_ready.go:49] node "auto-644084" is "Ready"
	I0904 07:01:25.699255 1161036 node_ready.go:38] duration metric: took 33.015328ms for node "auto-644084" to be "Ready" ...
	I0904 07:01:25.699273 1161036 api_server.go:52] waiting for apiserver process to appear ...
	I0904 07:01:25.699333 1161036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 07:01:25.722243 1161036 main.go:141] libmachine: Making call to close driver server
	I0904 07:01:25.722272 1161036 main.go:141] libmachine: (auto-644084) Calling .Close
	I0904 07:01:25.722547 1161036 main.go:141] libmachine: Successfully made call to close driver server
	I0904 07:01:25.722566 1161036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 07:01:25.722582 1161036 main.go:141] libmachine: (auto-644084) DBG | Closing plugin on server side
	I0904 07:01:26.131289 1161036 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.01669606s)
	I0904 07:01:26.131360 1161036 main.go:141] libmachine: Making call to close driver server
	I0904 07:01:26.131377 1161036 main.go:141] libmachine: (auto-644084) Calling .Close
	I0904 07:01:26.131394 1161036 api_server.go:72] duration metric: took 1.521629414s to wait for apiserver process to appear ...
	I0904 07:01:26.131420 1161036 api_server.go:88] waiting for apiserver healthz status ...
	I0904 07:01:26.131443 1161036 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0904 07:01:26.131753 1161036 main.go:141] libmachine: Successfully made call to close driver server
	I0904 07:01:26.131772 1161036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 07:01:26.131782 1161036 main.go:141] libmachine: Making call to close driver server
	I0904 07:01:26.131790 1161036 main.go:141] libmachine: (auto-644084) Calling .Close
	I0904 07:01:26.132079 1161036 main.go:141] libmachine: Successfully made call to close driver server
	I0904 07:01:26.132100 1161036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 07:01:26.133568 1161036 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
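
The healthz wait above is a plain HTTPS GET against https://192.168.61.91:8443/healthz that has to tolerate the apiserver's serving certificate being signed by the cluster's own CA. A minimal sketch using InsecureSkipVerify for brevity; real code would pin the cluster CA instead:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz performs the same probe as the log's healthz wait:
    // GET /healthz over TLS, skipping verification because the serving
    // cert is signed by the cluster CA rather than a public one.
    func checkHealthz(url string) (string, error) {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return "", err
        }
        return fmt.Sprintf("%d %s", resp.StatusCode, body), nil
    }

    func main() {
        status, err := checkHealthz("https://192.168.61.91:8443/healthz")
        if err != nil {
            panic(err)
        }
        fmt.Println(status) // expect "200 ok" once the apiserver is up
    }
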
	I0904 07:01:27.112230 1161732 start.go:364] duration metric: took 17.088294365s to acquireMachinesLock for "pause-017566"
	I0904 07:01:27.112296 1161732 start.go:96] Skipping create...Using existing machine configuration
	I0904 07:01:27.112305 1161732 fix.go:54] fixHost starting: 
	I0904 07:01:27.112765 1161732 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 07:01:27.112831 1161732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 07:01:27.132201 1161732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45975
	I0904 07:01:27.132672 1161732 main.go:141] libmachine: () Calling .GetVersion
	I0904 07:01:27.133209 1161732 main.go:141] libmachine: Using API Version  1
	I0904 07:01:27.133241 1161732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 07:01:27.133709 1161732 main.go:141] libmachine: () Calling .GetMachineName
	I0904 07:01:27.133962 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:27.134143 1161732 main.go:141] libmachine: (pause-017566) Calling .GetState
	I0904 07:01:27.136167 1161732 fix.go:112] recreateIfNeeded on pause-017566: state=Running err=<nil>
	W0904 07:01:27.136193 1161732 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 07:01:25.228029 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.228525 1161522 main.go:141] libmachine: (kindnet-644084) found domain IP: 192.168.83.184
	I0904 07:01:25.228550 1161522 main.go:141] libmachine: (kindnet-644084) reserving static IP address...
	I0904 07:01:25.228562 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has current primary IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.229075 1161522 main.go:141] libmachine: (kindnet-644084) DBG | unable to find host DHCP lease matching {name: "kindnet-644084", mac: "52:54:00:f6:90:8a", ip: "192.168.83.184"} in network mk-kindnet-644084
	I0904 07:01:25.307695 1161522 main.go:141] libmachine: (kindnet-644084) reserved static IP address 192.168.83.184 for domain kindnet-644084
	I0904 07:01:25.307728 1161522 main.go:141] libmachine: (kindnet-644084) DBG | Getting to WaitForSSH function...
	I0904 07:01:25.307736 1161522 main.go:141] libmachine: (kindnet-644084) waiting for SSH...
	I0904 07:01:25.310704 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.311278 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:25.311325 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.311354 1161522 main.go:141] libmachine: (kindnet-644084) DBG | Using SSH client type: external
	I0904 07:01:25.311416 1161522 main.go:141] libmachine: (kindnet-644084) DBG | Using SSH private key: /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/kindnet-644084/id_rsa (-rw-------)
	I0904 07:01:25.311469 1161522 main.go:141] libmachine: (kindnet-644084) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/kindnet-644084/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0904 07:01:25.311493 1161522 main.go:141] libmachine: (kindnet-644084) DBG | About to run SSH command:
	I0904 07:01:25.311507 1161522 main.go:141] libmachine: (kindnet-644084) DBG | exit 0
	I0904 07:01:25.447275 1161522 main.go:141] libmachine: (kindnet-644084) DBG | SSH cmd err, output: <nil>: 
	I0904 07:01:25.447611 1161522 main.go:141] libmachine: (kindnet-644084) KVM machine creation complete
	I0904 07:01:25.447970 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetConfigRaw
	I0904 07:01:25.448694 1161522 main.go:141] libmachine: (kindnet-644084) Calling .DriverName
	I0904 07:01:25.448949 1161522 main.go:141] libmachine: (kindnet-644084) Calling .DriverName
	I0904 07:01:25.449124 1161522 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0904 07:01:25.449142 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetState
	I0904 07:01:25.450505 1161522 main.go:141] libmachine: Detecting operating system of created instance...
	I0904 07:01:25.450522 1161522 main.go:141] libmachine: Waiting for SSH to be available...
	I0904 07:01:25.450529 1161522 main.go:141] libmachine: Getting to WaitForSSH function...
	I0904 07:01:25.450538 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHHostname
	I0904 07:01:25.453931 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.454362 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:25.454390 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.454538 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHPort
	I0904 07:01:25.454745 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:25.454962 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:25.455138 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHUsername
	I0904 07:01:25.455391 1161522 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:25.455708 1161522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.83.184 22 <nil> <nil>}
	I0904 07:01:25.455723 1161522 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0904 07:01:25.574620 1161522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
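
Both WaitForSSH variants above (the external ssh binary earlier, the native client here) boil down to one probe: open an SSH session and run `exit 0`; a zero exit status proves sshd is answering and the key is accepted. A sketch of the external-client form, with user, host, and options mirroring this run; the key path is a placeholder:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady runs `exit 0` over ssh, the same liveness probe WaitForSSH
    // uses; success means sshd is up and the key is accepted. The options
    // mirror the external-client invocation shown in the log.
    func sshReady(user, host, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", keyPath,
            fmt.Sprintf("%s@%s", user, host),
            "exit 0")
        return cmd.Run() == nil
    }

    func main() {
        for !sshReady("docker", "192.168.83.184", "/path/to/id_rsa") {
            time.Sleep(2 * time.Second) // retry until the guest's sshd is up
        }
        fmt.Println("SSH is available")
    }
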
	I0904 07:01:25.574651 1161522 main.go:141] libmachine: Detecting the provisioner...
	I0904 07:01:25.574662 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHHostname
	I0904 07:01:25.578426 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.578862 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:25.578895 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.579097 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHPort
	I0904 07:01:25.579324 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:25.579519 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:25.579700 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHUsername
	I0904 07:01:25.579886 1161522 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:25.580192 1161522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.83.184 22 <nil> <nil>}
	I0904 07:01:25.580207 1161522 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0904 07:01:25.700679 1161522 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0904 07:01:25.700771 1161522 main.go:141] libmachine: found compatible host: buildroot
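
Provisioner detection is a small /etc/os-release parse: the ID field ("buildroot" here) selects which provisioning path runs. A sketch of that parse; quoting rules are simplified relative to the full os-release spec:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // osReleaseID extracts the ID= field from /etc/os-release, which is
    // what provisioner detection keys on ("buildroot" in this run).
    func osReleaseID(path string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            line := scanner.Text()
            if strings.HasPrefix(line, "ID=") {
                // Values may be quoted ("buildroot") or bare (buildroot).
                return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
            }
        }
        if err := scanner.Err(); err != nil {
            return "", err
        }
        return "", fmt.Errorf("no ID field in %s", path)
    }

    func main() {
        id, err := osReleaseID("/etc/os-release")
        if err != nil {
            panic(err)
        }
        fmt.Println("detected provisioner target:", id)
    }
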
	I0904 07:01:25.700789 1161522 main.go:141] libmachine: Provisioning with buildroot...
	I0904 07:01:25.700803 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetMachineName
	I0904 07:01:25.701110 1161522 buildroot.go:166] provisioning hostname "kindnet-644084"
	I0904 07:01:25.701145 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetMachineName
	I0904 07:01:25.701360 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHHostname
	I0904 07:01:25.704760 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.705200 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:25.705232 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.705422 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHPort
	I0904 07:01:25.705590 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:25.705702 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:25.705909 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHUsername
	I0904 07:01:25.706130 1161522 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:25.706393 1161522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.83.184 22 <nil> <nil>}
	I0904 07:01:25.706407 1161522 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-644084 && echo "kindnet-644084" | sudo tee /etc/hostname
	I0904 07:01:25.851503 1161522 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-644084
	
	I0904 07:01:25.851543 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHHostname
	I0904 07:01:25.855258 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.855671 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:25.855721 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.855907 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHPort
	I0904 07:01:25.856136 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:25.856302 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:25.856474 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHUsername
	I0904 07:01:25.856635 1161522 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:25.856882 1161522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.83.184 22 <nil> <nil>}
	I0904 07:01:25.856900 1161522 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-644084' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-644084/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-644084' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 07:01:25.982172 1161522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 07:01:25.982274 1161522 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-1115845/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-1115845/.minikube}
	I0904 07:01:25.982321 1161522 buildroot.go:174] setting up certificates
	I0904 07:01:25.982336 1161522 provision.go:84] configureAuth start
	I0904 07:01:25.982357 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetMachineName
	I0904 07:01:25.982721 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetIP
	I0904 07:01:25.985838 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.986277 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:25.986329 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.986494 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHHostname
	I0904 07:01:25.989308 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.989654 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:25.989727 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:25.989989 1161522 provision.go:143] copyHostCerts
	I0904 07:01:25.990089 1161522 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.pem, removing ...
	I0904 07:01:25.990111 1161522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.pem
	I0904 07:01:25.990187 1161522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.pem (1082 bytes)
	I0904 07:01:25.990343 1161522 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1115845/.minikube/cert.pem, removing ...
	I0904 07:01:25.990358 1161522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1115845/.minikube/cert.pem
	I0904 07:01:25.990401 1161522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-1115845/.minikube/cert.pem (1123 bytes)
	I0904 07:01:25.990657 1161522 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1115845/.minikube/key.pem, removing ...
	I0904 07:01:25.990716 1161522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1115845/.minikube/key.pem
	I0904 07:01:25.991341 1161522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-1115845/.minikube/key.pem (1679 bytes)
	I0904 07:01:25.991497 1161522 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca-key.pem org=jenkins.kindnet-644084 san=[127.0.0.1 192.168.83.184 kindnet-644084 localhost minikube]
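
The server certificate generated here carries exactly the SAN set printed in the log line (loopback, the guest IP, the hostname, localhost, minikube), signed by minikube's own CA (certs/ca.pem). A compact sketch of building such a certificate with crypto/x509, self-signed rather than CA-signed to stay short:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // Builds a server certificate carrying the SAN set from the log line.
    // Self-signed for brevity; minikube signs with its own CA instead.
    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-644084"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"kindnet-644084", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.83.184")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            panic(err)
        }
    }
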
	I0904 07:01:26.364318 1161522 provision.go:177] copyRemoteCerts
	I0904 07:01:26.364435 1161522 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 07:01:26.364479 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHHostname
	I0904 07:01:26.367262 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:26.367608 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:26.367638 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:26.367812 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHPort
	I0904 07:01:26.368060 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:26.368256 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHUsername
	I0904 07:01:26.368410 1161522 sshutil.go:53] new ssh client: &{IP:192.168.83.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/kindnet-644084/id_rsa Username:docker}
	I0904 07:01:26.455226 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0904 07:01:26.490466 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0904 07:01:26.526556 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0904 07:01:26.562461 1161522 provision.go:87] duration metric: took 580.106076ms to configureAuth
	I0904 07:01:26.562502 1161522 buildroot.go:189] setting minikube options for container-runtime
	I0904 07:01:26.562712 1161522 config.go:182] Loaded profile config "kindnet-644084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 07:01:26.562810 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHHostname
	I0904 07:01:26.566326 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:26.566743 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:26.566779 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:26.566940 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHPort
	I0904 07:01:26.567186 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:26.567342 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:26.567527 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHUsername
	I0904 07:01:26.567748 1161522 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:26.568009 1161522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.83.184 22 <nil> <nil>}
	I0904 07:01:26.568030 1161522 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 07:01:26.842175 1161522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 07:01:26.842220 1161522 main.go:141] libmachine: Checking connection to Docker...
	I0904 07:01:26.842230 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetURL
	I0904 07:01:26.843536 1161522 main.go:141] libmachine: (kindnet-644084) DBG | using libvirt version 6000000
	I0904 07:01:26.845918 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:26.846295 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:26.846317 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:26.846546 1161522 main.go:141] libmachine: Docker is up and running!
	I0904 07:01:26.846562 1161522 main.go:141] libmachine: Reticulating splines...
	I0904 07:01:26.846572 1161522 client.go:171] duration metric: took 25.604002763s to LocalClient.Create
	I0904 07:01:26.846607 1161522 start.go:167] duration metric: took 25.604075218s to libmachine.API.Create "kindnet-644084"
	I0904 07:01:26.846622 1161522 start.go:293] postStartSetup for "kindnet-644084" (driver="kvm2")
	I0904 07:01:26.846636 1161522 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 07:01:26.846662 1161522 main.go:141] libmachine: (kindnet-644084) Calling .DriverName
	I0904 07:01:26.846938 1161522 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 07:01:26.846967 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHHostname
	I0904 07:01:26.849284 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:26.849629 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:26.849662 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:26.849789 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHPort
	I0904 07:01:26.849985 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:26.850156 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHUsername
	I0904 07:01:26.850330 1161522 sshutil.go:53] new ssh client: &{IP:192.168.83.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/kindnet-644084/id_rsa Username:docker}
	I0904 07:01:26.939848 1161522 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 07:01:26.944669 1161522 info.go:137] Remote host: Buildroot 2025.02
	I0904 07:01:26.944695 1161522 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1115845/.minikube/addons for local assets ...
	I0904 07:01:26.944758 1161522 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1115845/.minikube/files for local assets ...
	I0904 07:01:26.944832 1161522 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-1115845/.minikube/files/etc/ssl/certs/11200742.pem -> 11200742.pem in /etc/ssl/certs
	I0904 07:01:26.944918 1161522 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 07:01:26.956359 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/files/etc/ssl/certs/11200742.pem --> /etc/ssl/certs/11200742.pem (1708 bytes)
	I0904 07:01:26.986211 1161522 start.go:296] duration metric: took 139.572703ms for postStartSetup
	I0904 07:01:26.986260 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetConfigRaw
	I0904 07:01:26.986933 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetIP
	I0904 07:01:26.989754 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:26.990151 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:26.990197 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:26.990470 1161522 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/config.json ...
	I0904 07:01:26.990650 1161522 start.go:128] duration metric: took 25.769824603s to createHost
	I0904 07:01:26.990674 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHHostname
	I0904 07:01:26.993010 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:26.993292 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:26.993322 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:26.993510 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHPort
	I0904 07:01:26.993714 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:26.993881 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:26.994008 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHUsername
	I0904 07:01:26.994163 1161522 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:26.994406 1161522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.83.184 22 <nil> <nil>}
	I0904 07:01:26.994423 1161522 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0904 07:01:27.112031 1161522 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756969287.097530347
	
	I0904 07:01:27.112068 1161522 fix.go:216] guest clock: 1756969287.097530347
	I0904 07:01:27.112084 1161522 fix.go:229] Guest: 2025-09-04 07:01:27.097530347 +0000 UTC Remote: 2025-09-04 07:01:26.990662034 +0000 UTC m=+28.660247878 (delta=106.868313ms)
	I0904 07:01:27.112118 1161522 fix.go:200] guest clock delta is within tolerance: 106.868313ms
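
The clock check runs `date +%s.%N` on the guest, parses the result as fractional epoch seconds, and compares it with the host reference time captured when the command returned; the 106.868313ms delta here is inside tolerance, so no resync is needed. A sketch of that comparison using this run's values; the one-second tolerance is an assumption, and float parsing is only approximate at the nanosecond level:

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output as fractional
    // epoch seconds and returns guest minus host. Float parsing loses a
    // few hundred nanoseconds, which is fine for a skew check.
    func clockDelta(guestStamp string, hostRef time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestStamp, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(hostRef), nil
    }

    func main() {
        // Values taken from this run's log lines.
        hostRef := time.Date(2025, 9, 4, 7, 1, 26, 990662034, time.UTC)
        d, err := clockDelta("1756969287.097530347", hostRef)
        if err != nil {
            panic(err)
        }
        // The one-second tolerance below is an assumption for this sketch.
        fmt.Printf("delta=%v within tolerance: %v\n", d, d.Abs() < time.Second)
    }
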
	I0904 07:01:27.112128 1161522 start.go:83] releasing machines lock for "kindnet-644084", held for 25.891490526s
	I0904 07:01:27.112165 1161522 main.go:141] libmachine: (kindnet-644084) Calling .DriverName
	I0904 07:01:27.112453 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetIP
	I0904 07:01:27.115601 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:27.116034 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:27.116065 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:27.116363 1161522 main.go:141] libmachine: (kindnet-644084) Calling .DriverName
	I0904 07:01:27.116925 1161522 main.go:141] libmachine: (kindnet-644084) Calling .DriverName
	I0904 07:01:27.117119 1161522 main.go:141] libmachine: (kindnet-644084) Calling .DriverName
	I0904 07:01:27.117283 1161522 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 07:01:27.117340 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHHostname
	I0904 07:01:27.117358 1161522 ssh_runner.go:195] Run: cat /version.json
	I0904 07:01:27.117384 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHHostname
	I0904 07:01:27.120542 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:27.120739 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:27.121014 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:27.121041 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:27.121153 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:27.121183 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:27.121215 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHPort
	I0904 07:01:27.121384 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHPort
	I0904 07:01:27.121403 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:27.121546 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHKeyPath
	I0904 07:01:27.121599 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHUsername
	I0904 07:01:27.121688 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetSSHUsername
	I0904 07:01:27.121849 1161522 sshutil.go:53] new ssh client: &{IP:192.168.83.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/kindnet-644084/id_rsa Username:docker}
	I0904 07:01:27.121857 1161522 sshutil.go:53] new ssh client: &{IP:192.168.83.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/kindnet-644084/id_rsa Username:docker}
	I0904 07:01:27.203864 1161522 ssh_runner.go:195] Run: systemctl --version
	I0904 07:01:27.243120 1161522 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 07:01:27.405357 1161522 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0904 07:01:27.413608 1161522 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0904 07:01:27.413672 1161522 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 07:01:27.436031 1161522 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
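
The find one-liner above sidelines any pre-existing bridge or podman conflists by renaming them with a .mk_disabled suffix so they cannot compete with the 1-k8s.conflist written earlier; here it caught 87-podman-bridge.conflist. The same operation sketched in Go (the find command's regular-file check is omitted for brevity):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableConflictingCNI renames bridge/podman conflists under the
    // given directory to *.mk_disabled, the same effect as the log's
    // find ... -exec mv one-liner (maxdepth 1 is implied by Glob).
    func disableConflictingCNI(dir string) ([]string, error) {
        var disabled []string
        for _, pattern := range []string{"*bridge*", "*podman*"} {
            matches, err := filepath.Glob(filepath.Join(dir, pattern))
            if err != nil {
                return nil, err
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already sidelined
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    return nil, err
                }
                disabled = append(disabled, m)
            }
        }
        return disabled, nil
    }

    func main() {
        disabled, err := disableConflictingCNI("/etc/cni/net.d")
        if err != nil {
            panic(err)
        }
        fmt.Println("disabled:", disabled)
    }
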
	I0904 07:01:27.436060 1161522 start.go:495] detecting cgroup driver to use...
	I0904 07:01:27.436135 1161522 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 07:01:27.457457 1161522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 07:01:27.475563 1161522 docker.go:218] disabling cri-docker service (if available) ...
	I0904 07:01:27.475657 1161522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 07:01:27.497358 1161522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 07:01:27.515372 1161522 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 07:01:27.689461 1161522 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 07:01:27.848740 1161522 docker.go:234] disabling docker service ...
	I0904 07:01:27.848817 1161522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 07:01:27.865300 1161522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 07:01:27.879500 1161522 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 07:01:28.097419 1161522 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0904 07:01:28.245948 1161522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 07:01:28.262314 1161522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 07:01:28.285064 1161522 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0904 07:01:28.285155 1161522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:28.297810 1161522 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 07:01:28.297898 1161522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:28.311022 1161522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:28.323750 1161522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:28.336904 1161522 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 07:01:28.349729 1161522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:28.360806 1161522 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:28.379317 1161522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:28.390715 1161522 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 07:01:28.400095 1161522 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0904 07:01:28.400167 1161522 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0904 07:01:28.417964 1161522 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
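
With br_netfilter loaded (the sysctl probe fails with "No such file or directory" until the module exists, hence the modprobe), the last kernel prerequisite is IPv4 forwarding, enabled by writing 1 into the procfs knob exactly as the logged shell command does. The equivalent write in Go, which needs root:

    package main

    import "os"

    // Enables IPv4 forwarding the same way the logged shell command does:
    // by writing "1" into the procfs knob. Requires root, and assumes
    // br_netfilter has already been loaded via modprobe.
    func main() {
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
            panic(err)
        }
    }
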
	I0904 07:01:28.428649 1161522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 07:01:28.574072 1161522 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0904 07:01:28.680048 1161522 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 07:01:28.680129 1161522 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 07:01:28.684968 1161522 start.go:563] Will wait 60s for crictl version
	I0904 07:01:28.685019 1161522 ssh_runner.go:195] Run: which crictl
	I0904 07:01:28.688871 1161522 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 07:01:28.726950 1161522 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0904 07:01:28.727024 1161522 ssh_runner.go:195] Run: crio --version
	I0904 07:01:28.755077 1161522 ssh_runner.go:195] Run: crio --version
	I0904 07:01:28.783931 1161522 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0904 07:01:27.138268 1161732 out.go:252] * Updating the running kvm2 "pause-017566" VM ...
	I0904 07:01:27.138298 1161732 machine.go:93] provisionDockerMachine start ...
	I0904 07:01:27.138313 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:27.138518 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:27.141213 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.141742 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:27.141767 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.142052 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:27.142211 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.142329 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.142435 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:27.142642 1161732 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:27.142939 1161732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0904 07:01:27.142951 1161732 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 07:01:27.264475 1161732 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-017566
	
	I0904 07:01:27.264509 1161732 main.go:141] libmachine: (pause-017566) Calling .GetMachineName
	I0904 07:01:27.264829 1161732 buildroot.go:166] provisioning hostname "pause-017566"
	I0904 07:01:27.264868 1161732 main.go:141] libmachine: (pause-017566) Calling .GetMachineName
	I0904 07:01:27.265100 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:27.268258 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.268727 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:27.268755 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.268949 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:27.269134 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.269298 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.269460 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:27.269625 1161732 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:27.269851 1161732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0904 07:01:27.269866 1161732 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-017566 && echo "pause-017566" | sudo tee /etc/hostname
	I0904 07:01:27.402385 1161732 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-017566
	
	I0904 07:01:27.402422 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:27.406417 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.406873 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:27.406898 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.407170 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:27.407411 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.407590 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.407783 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:27.408014 1161732 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:27.408402 1161732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0904 07:01:27.408442 1161732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-017566' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-017566/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-017566' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 07:01:27.536058 1161732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 07:01:27.536105 1161732 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-1115845/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-1115845/.minikube}
	I0904 07:01:27.536138 1161732 buildroot.go:174] setting up certificates
	I0904 07:01:27.536156 1161732 provision.go:84] configureAuth start
	I0904 07:01:27.536176 1161732 main.go:141] libmachine: (pause-017566) Calling .GetMachineName
	I0904 07:01:27.536479 1161732 main.go:141] libmachine: (pause-017566) Calling .GetIP
	I0904 07:01:27.539375 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.539785 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:27.539812 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.540030 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:27.542344 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.542629 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:27.542667 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.542888 1161732 provision.go:143] copyHostCerts
	I0904 07:01:27.542988 1161732 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1115845/.minikube/key.pem, removing ...
	I0904 07:01:27.543011 1161732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1115845/.minikube/key.pem
	I0904 07:01:27.543079 1161732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-1115845/.minikube/key.pem (1679 bytes)
	I0904 07:01:27.543198 1161732 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.pem, removing ...
	I0904 07:01:27.543210 1161732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.pem
	I0904 07:01:27.543244 1161732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.pem (1082 bytes)
	I0904 07:01:27.543319 1161732 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-1115845/.minikube/cert.pem, removing ...
	I0904 07:01:27.543330 1161732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-1115845/.minikube/cert.pem
	I0904 07:01:27.543357 1161732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-1115845/.minikube/cert.pem (1123 bytes)
	I0904 07:01:27.543418 1161732 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca-key.pem org=jenkins.pause-017566 san=[127.0.0.1 192.168.39.168 localhost minikube pause-017566]
	I0904 07:01:27.703670 1161732 provision.go:177] copyRemoteCerts
	I0904 07:01:27.703728 1161732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 07:01:27.703755 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:27.706487 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.706849 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:27.706884 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.707049 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:27.707243 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.707437 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:27.707651 1161732 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/pause-017566/id_rsa Username:docker}
	I0904 07:01:27.798776 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0904 07:01:27.833553 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0904 07:01:27.865385 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
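copyRemoteCerts above installs the CA and the freshly generated server keypair under /etc/docker on the guest. To check that the deployed server cert actually carries the SANs requested at generation time (127.0.0.1, 192.168.39.168, localhost, minikube, pause-017566), something like:

    # print the SAN extension of the installed server cert
    sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A 1 'Subject Alternative Name'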
	I0904 07:01:27.894589 1161732 provision.go:87] duration metric: took 358.411244ms to configureAuth
	I0904 07:01:27.894626 1161732 buildroot.go:189] setting minikube options for container-runtime
	I0904 07:01:27.894995 1161732 config.go:182] Loaded profile config "pause-017566": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 07:01:27.895097 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:27.898221 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.898667 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:27.898715 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:27.898935 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:27.899156 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.899364 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:27.899545 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:27.899735 1161732 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:27.899945 1161732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0904 07:01:27.899959 1161732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0904 07:01:26.134641 1161036 addons.go:514] duration metric: took 1.524876476s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0904 07:01:26.152041 1161036 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I0904 07:01:26.153203 1161036 api_server.go:141] control plane version: v1.34.0
	I0904 07:01:26.153239 1161036 api_server.go:131] duration metric: took 21.810181ms to wait for apiserver health ...
	I0904 07:01:26.153253 1161036 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 07:01:26.160679 1161036 system_pods.go:59] 8 kube-system pods found
	I0904 07:01:26.160717 1161036 system_pods.go:61] "coredns-66bc5c9577-lq225" [060ef169-5f90-41ea-92b3-1bdfc4cdb068] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 07:01:26.160727 1161036 system_pods.go:61] "coredns-66bc5c9577-qrmhs" [615315ae-405b-4992-841c-24f070bdb631] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 07:01:26.160735 1161036 system_pods.go:61] "etcd-auto-644084" [6220c93f-264a-4011-be10-58b5f20081b7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 07:01:26.160741 1161036 system_pods.go:61] "kube-apiserver-auto-644084" [092f42d8-0f0c-4a20-aced-13411a94e4fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 07:01:26.160745 1161036 system_pods.go:61] "kube-controller-manager-auto-644084" [b14e2b6f-66d6-4801-8d33-6e2f9e762a76] Running
	I0904 07:01:26.160751 1161036 system_pods.go:61] "kube-proxy-fqgp9" [4ab5cafd-94f5-4b23-8026-8208fb8ce408] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0904 07:01:26.160756 1161036 system_pods.go:61] "kube-scheduler-auto-644084" [b660931a-535c-470b-a314-68e4955c9af9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 07:01:26.160759 1161036 system_pods.go:61] "storage-provisioner" [2800a949-a2c3-4230-8f32-064780c523fb] Pending
	I0904 07:01:26.160766 1161036 system_pods.go:74] duration metric: took 7.506067ms to wait for pod list to return data ...
	I0904 07:01:26.160775 1161036 default_sa.go:34] waiting for default service account to be created ...
	I0904 07:01:26.166881 1161036 default_sa.go:45] found service account: "default"
	I0904 07:01:26.166909 1161036 default_sa.go:55] duration metric: took 6.127726ms for default service account to be created ...
	I0904 07:01:26.166929 1161036 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 07:01:26.170675 1161036 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-644084" context rescaled to 1 replicas
	I0904 07:01:26.174853 1161036 system_pods.go:86] 8 kube-system pods found
	I0904 07:01:26.174887 1161036 system_pods.go:89] "coredns-66bc5c9577-lq225" [060ef169-5f90-41ea-92b3-1bdfc4cdb068] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 07:01:26.174896 1161036 system_pods.go:89] "coredns-66bc5c9577-qrmhs" [615315ae-405b-4992-841c-24f070bdb631] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 07:01:26.174907 1161036 system_pods.go:89] "etcd-auto-644084" [6220c93f-264a-4011-be10-58b5f20081b7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 07:01:26.174921 1161036 system_pods.go:89] "kube-apiserver-auto-644084" [092f42d8-0f0c-4a20-aced-13411a94e4fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 07:01:26.174928 1161036 system_pods.go:89] "kube-controller-manager-auto-644084" [b14e2b6f-66d6-4801-8d33-6e2f9e762a76] Running
	I0904 07:01:26.174937 1161036 system_pods.go:89] "kube-proxy-fqgp9" [4ab5cafd-94f5-4b23-8026-8208fb8ce408] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0904 07:01:26.174950 1161036 system_pods.go:89] "kube-scheduler-auto-644084" [b660931a-535c-470b-a314-68e4955c9af9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 07:01:26.174967 1161036 system_pods.go:89] "storage-provisioner" [2800a949-a2c3-4230-8f32-064780c523fb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 07:01:26.175002 1161036 retry.go:31] will retry after 251.951297ms: missing components: kube-dns, kube-proxy
	I0904 07:01:26.430955 1161036 system_pods.go:86] 8 kube-system pods found
	I0904 07:01:26.430994 1161036 system_pods.go:89] "coredns-66bc5c9577-lq225" [060ef169-5f90-41ea-92b3-1bdfc4cdb068] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 07:01:26.431013 1161036 system_pods.go:89] "coredns-66bc5c9577-qrmhs" [615315ae-405b-4992-841c-24f070bdb631] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 07:01:26.431024 1161036 system_pods.go:89] "etcd-auto-644084" [6220c93f-264a-4011-be10-58b5f20081b7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 07:01:26.431030 1161036 system_pods.go:89] "kube-apiserver-auto-644084" [092f42d8-0f0c-4a20-aced-13411a94e4fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 07:01:26.431036 1161036 system_pods.go:89] "kube-controller-manager-auto-644084" [b14e2b6f-66d6-4801-8d33-6e2f9e762a76] Running
	I0904 07:01:26.431041 1161036 system_pods.go:89] "kube-proxy-fqgp9" [4ab5cafd-94f5-4b23-8026-8208fb8ce408] Running
	I0904 07:01:26.431048 1161036 system_pods.go:89] "kube-scheduler-auto-644084" [b660931a-535c-470b-a314-68e4955c9af9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 07:01:26.431055 1161036 system_pods.go:89] "storage-provisioner" [2800a949-a2c3-4230-8f32-064780c523fb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 07:01:26.431069 1161036 system_pods.go:126] duration metric: took 264.132562ms to wait for k8s-apps to be running ...
	I0904 07:01:26.431085 1161036 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 07:01:26.431154 1161036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 07:01:26.451788 1161036 system_svc.go:56] duration metric: took 20.689252ms WaitForService to wait for kubelet
	I0904 07:01:26.451835 1161036 kubeadm.go:578] duration metric: took 1.842072357s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 07:01:26.451863 1161036 node_conditions.go:102] verifying NodePressure condition ...
	I0904 07:01:26.461327 1161036 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0904 07:01:26.461377 1161036 node_conditions.go:123] node cpu capacity is 2
	I0904 07:01:26.461396 1161036 node_conditions.go:105] duration metric: took 9.526558ms to run NodePressure ...
	I0904 07:01:26.461414 1161036 start.go:241] waiting for startup goroutines ...
	I0904 07:01:26.461432 1161036 start.go:246] waiting for cluster config update ...
	I0904 07:01:26.461449 1161036 start.go:255] writing updated cluster config ...
	I0904 07:01:26.461794 1161036 ssh_runner.go:195] Run: rm -f paused
	I0904 07:01:26.471380 1161036 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 07:01:26.476749 1161036 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lq225" in "kube-system" namespace to be "Ready" or be gone ...
	W0904 07:01:28.482448 1161036 pod_ready.go:104] pod "coredns-66bc5c9577-lq225" is not "Ready", error: <nil>
	I0904 07:01:28.785014 1161522 main.go:141] libmachine: (kindnet-644084) Calling .GetIP
	I0904 07:01:28.787980 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:28.788404 1161522 main.go:141] libmachine: (kindnet-644084) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:90:8a", ip: ""} in network mk-kindnet-644084: {Iface:virbr4 ExpiryTime:2025-09-04 08:01:16 +0000 UTC Type:0 Mac:52:54:00:f6:90:8a Iaid: IPaddr:192.168.83.184 Prefix:24 Hostname:kindnet-644084 Clientid:01:52:54:00:f6:90:8a}
	I0904 07:01:28.788434 1161522 main.go:141] libmachine: (kindnet-644084) DBG | domain kindnet-644084 has defined IP address 192.168.83.184 and MAC address 52:54:00:f6:90:8a in network mk-kindnet-644084
	I0904 07:01:28.788705 1161522 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0904 07:01:28.792973 1161522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
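The one-liner above is a rewrite-then-copy idiom: filter out any stale host.minikube.internal entry, append the current one, and install the temp file over /etc/hosts in a single cp. The same idiom, generalized with placeholder variables (values taken from the log):

    IP=192.168.83.1; NAME=host.minikube.internal
    { grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "${IP}" "${NAME}"; } > "/tmp/hosts.$$"
    sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"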
	I0904 07:01:28.807240 1161522 kubeadm.go:875] updating cluster {Name:kindnet-644084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-644084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.83.184 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 07:01:28.807354 1161522 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 07:01:28.807400 1161522 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 07:01:28.840993 1161522 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0904 07:01:28.841071 1161522 ssh_runner.go:195] Run: which lz4
	I0904 07:01:28.845181 1161522 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0904 07:01:28.849589 1161522 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0904 07:01:28.849627 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0904 07:01:30.209516 1161522 crio.go:462] duration metric: took 1.36436722s to copy over tarball
	I0904 07:01:30.209594 1161522 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0904 07:01:31.972342 1161522 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.762712301s)
	I0904 07:01:31.972381 1161522 crio.go:469] duration metric: took 1.762831752s to extract the tarball
	I0904 07:01:31.972405 1161522 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0904 07:01:32.016595 1161522 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 07:01:32.060481 1161522 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 07:01:32.060508 1161522 cache_images.go:85] Images are preloaded, skipping loading
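End to end, the preload path above is: check for the tarball on the guest, copy it in only if the stat fails, unpack it into /var while preserving xattrs (so file capabilities survive), delete the tarball, and re-list images. The guest-side view, as a sketch with commands from the log:

    # the ~400 MB tarball itself is copied in from the host cache by
    # minikube's scp when this stat fails
    stat -c "%s %y" /preloaded.tar.lz4
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    sudo crictl images --output json | head -n 5   # preloaded registry.k8s.io images should now list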
	I0904 07:01:32.060518 1161522 kubeadm.go:926] updating node { 192.168.83.184 8443 v1.34.0 crio true true} ...
	I0904 07:01:32.060687 1161522 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-644084 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:kindnet-644084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
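The unit text above is installed as a systemd drop-in; the empty ExecStart= line clears any previously defined command before the minikube-specific kubelet invocation is set. To see the merged result on the node (drop-in path from the scp a few lines below):

    systemctl cat kubelet    # shows the base unit plus the 10-kubeadm.conf drop-in
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf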
	I0904 07:01:32.060797 1161522 ssh_runner.go:195] Run: crio config
	I0904 07:01:32.108719 1161522 cni.go:84] Creating CNI manager for "kindnet"
	I0904 07:01:32.108815 1161522 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 07:01:32.108857 1161522 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.184 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-644084 NodeName:kindnet-644084 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 07:01:32.109098 1161522 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-644084"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.184"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.184"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 07:01:32.109217 1161522 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 07:01:32.121186 1161522 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 07:01:32.121288 1161522 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 07:01:32.132241 1161522 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0904 07:01:32.152136 1161522 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 07:01:32.173828 1161522 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
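With the rendered config now staged at /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked before init; recent kubeadm releases include a validate subcommand, so a sketch against the staged file (binary path from the log) would be:

    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new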
	I0904 07:01:32.192239 1161522 ssh_runner.go:195] Run: grep 192.168.83.184	control-plane.minikube.internal$ /etc/hosts
	I0904 07:01:32.195961 1161522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 07:01:32.210154 1161522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 07:01:32.355287 1161522 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 07:01:32.390149 1161522 certs.go:68] Setting up /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084 for IP: 192.168.83.184
	I0904 07:01:32.390190 1161522 certs.go:194] generating shared ca certs ...
	I0904 07:01:32.390217 1161522 certs.go:226] acquiring lock for ca certs: {Name:mkb48abb711128619cd278e65e40c326a6b20d89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:01:32.390458 1161522 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.key
	I0904 07:01:32.390524 1161522 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/proxy-client-ca.key
	I0904 07:01:32.390542 1161522 certs.go:256] generating profile certs ...
	I0904 07:01:32.390616 1161522 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.key
	I0904 07:01:32.390640 1161522 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.crt with IP's: []
	I0904 07:01:32.498401 1161522 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.crt ...
	I0904 07:01:32.498433 1161522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.crt: {Name:mk8af1151167c6e0451312073e46d6b07e92c708 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:01:32.498603 1161522 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.key ...
	I0904 07:01:32.498613 1161522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.key: {Name:mk995c07d994cb142636879d119a9beafc08719c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:01:32.498698 1161522 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/apiserver.key.67b6fcd7
	I0904 07:01:32.498714 1161522 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/apiserver.crt.67b6fcd7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.184]
	I0904 07:01:32.623726 1161522 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/apiserver.crt.67b6fcd7 ...
	I0904 07:01:32.623759 1161522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/apiserver.crt.67b6fcd7: {Name:mkcc545f92daa830e262441c44ee9cb94ed51df3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:01:32.623923 1161522 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/apiserver.key.67b6fcd7 ...
	I0904 07:01:32.623938 1161522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/apiserver.key.67b6fcd7: {Name:mkceeeeeb72bf43d2a7b5cbec52c04225f142b23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:01:32.624014 1161522 certs.go:381] copying /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/apiserver.crt.67b6fcd7 -> /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/apiserver.crt
	I0904 07:01:32.624086 1161522 certs.go:385] copying /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/apiserver.key.67b6fcd7 -> /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/apiserver.key
	I0904 07:01:32.624138 1161522 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/proxy-client.key
	I0904 07:01:32.624158 1161522 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/proxy-client.crt with IP's: []
	I0904 07:01:32.811341 1161522 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/proxy-client.crt ...
	I0904 07:01:32.811375 1161522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/proxy-client.crt: {Name:mk73c63e2a6016f2fab5cda0d37845d338b66f6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:01:32.811537 1161522 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/proxy-client.key ...
	I0904 07:01:32.811554 1161522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/proxy-client.key: {Name:mk64324bb782cd5fc411a021c50384d975d8d985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:01:32.811757 1161522 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/1120074.pem (1338 bytes)
	W0904 07:01:32.811798 1161522 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/1120074_empty.pem, impossibly tiny 0 bytes
	I0904 07:01:32.811808 1161522 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 07:01:32.811828 1161522 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem (1082 bytes)
	I0904 07:01:32.811851 1161522 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/cert.pem (1123 bytes)
	I0904 07:01:32.811872 1161522 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/key.pem (1679 bytes)
	I0904 07:01:32.811907 1161522 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/files/etc/ssl/certs/11200742.pem (1708 bytes)
	I0904 07:01:32.812499 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 07:01:32.841059 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0904 07:01:32.867610 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 07:01:32.896130 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0904 07:01:32.922646 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0904 07:01:32.949599 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 07:01:32.975535 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 07:01:33.002769 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0904 07:01:33.028991 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/1120074.pem --> /usr/share/ca-certificates/1120074.pem (1338 bytes)
	I0904 07:01:33.057026 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/files/etc/ssl/certs/11200742.pem --> /usr/share/ca-certificates/11200742.pem (1708 bytes)
	I0904 07:01:33.090886 1161522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 07:01:33.135248 1161522 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 07:01:33.159742 1161522 ssh_runner.go:195] Run: openssl version
	I0904 07:01:33.167417 1161522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1120074.pem && ln -fs /usr/share/ca-certificates/1120074.pem /etc/ssl/certs/1120074.pem"
	I0904 07:01:33.184381 1161522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1120074.pem
	I0904 07:01:33.191187 1161522 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  4 06:04 /usr/share/ca-certificates/1120074.pem
	I0904 07:01:33.191267 1161522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1120074.pem
	I0904 07:01:33.199575 1161522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1120074.pem /etc/ssl/certs/51391683.0"
	I0904 07:01:33.212086 1161522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11200742.pem && ln -fs /usr/share/ca-certificates/11200742.pem /etc/ssl/certs/11200742.pem"
	I0904 07:01:33.225087 1161522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11200742.pem
	I0904 07:01:33.230385 1161522 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  4 06:04 /usr/share/ca-certificates/11200742.pem
	I0904 07:01:33.230444 1161522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11200742.pem
	I0904 07:01:33.237382 1161522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11200742.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 07:01:33.250165 1161522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 07:01:33.268084 1161522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 07:01:33.274327 1161522 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 05:54 /usr/share/ca-certificates/minikubeCA.pem
	I0904 07:01:33.274407 1161522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 07:01:33.283423 1161522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
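The opaque link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-name hashes: openssl x509 -hash prints the 8-hex-digit value, and linking a cert as <hash>.0 under /etc/ssl/certs is what lets TLS clients resolve it by subject. The pattern for one cert, using the minikube CA from the log:

    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${H}.0"
    ls -l "/etc/ssl/certs/${H}.0"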
	I0904 07:01:33.300612 1161522 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 07:01:33.305295 1161522 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0904 07:01:33.305369 1161522 kubeadm.go:392] StartCluster: {Name:kindnet-644084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-644084 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.83.184 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 07:01:33.305481 1161522 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 07:01:33.305544 1161522 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 07:01:33.347155 1161522 cri.go:89] found id: ""
	I0904 07:01:33.347227 1161522 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 07:01:33.359854 1161522 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 07:01:33.371988 1161522 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 07:01:33.384104 1161522 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 07:01:33.384129 1161522 kubeadm.go:157] found existing configuration files:
	
	I0904 07:01:33.384183 1161522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0904 07:01:33.395069 1161522 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 07:01:33.395142 1161522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 07:01:33.407195 1161522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0904 07:01:33.418297 1161522 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 07:01:33.418386 1161522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 07:01:33.431714 1161522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0904 07:01:33.446566 1161522 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 07:01:33.446621 1161522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 07:01:33.459119 1161522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0904 07:01:33.470247 1161522 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 07:01:33.470363 1161522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
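The four grep-then-rm exchanges above all follow one pattern: a kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and removed otherwise (here every file is absent, so each rm is a no-op). Collapsed into a single loop:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" 2>/dev/null \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done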
	I0904 07:01:33.485249 1161522 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0904 07:01:33.544936 1161522 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0904 07:01:33.545010 1161522 kubeadm.go:310] [preflight] Running pre-flight checks
	I0904 07:01:33.654436 1161522 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0904 07:01:33.654595 1161522 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0904 07:01:33.654762 1161522 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0904 07:01:33.665066 1161522 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
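kubeadm's phase machinery lets the preflight stage seen above be re-run in isolation, which can help when diagnosing the long --ignore-preflight-errors list minikube passes; a sketch, with binary and config paths from the log (some checks may still need the same ignore flags):

    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init phase preflight \
      --config /var/tmp/minikube/kubeadm.yaml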
	I0904 07:01:33.515165 1161732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0904 07:01:33.515196 1161732 machine.go:96] duration metric: took 6.376888505s to provisionDockerMachine
	I0904 07:01:33.515212 1161732 start.go:293] postStartSetup for "pause-017566" (driver="kvm2")
	I0904 07:01:33.515226 1161732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 07:01:33.515249 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:33.515626 1161732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 07:01:33.515661 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:33.519114 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.519592 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:33.519624 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.519795 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:33.519977 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:33.520206 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:33.520390 1161732 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/pause-017566/id_rsa Username:docker}
	I0904 07:01:33.610679 1161732 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 07:01:33.616704 1161732 info.go:137] Remote host: Buildroot 2025.02
	I0904 07:01:33.616739 1161732 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1115845/.minikube/addons for local assets ...
	I0904 07:01:33.616814 1161732 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-1115845/.minikube/files for local assets ...
	I0904 07:01:33.616905 1161732 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-1115845/.minikube/files/etc/ssl/certs/11200742.pem -> 11200742.pem in /etc/ssl/certs
	I0904 07:01:33.617040 1161732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 07:01:33.631551 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/files/etc/ssl/certs/11200742.pem --> /etc/ssl/certs/11200742.pem (1708 bytes)
	I0904 07:01:33.665307 1161732 start.go:296] duration metric: took 150.079866ms for postStartSetup
	I0904 07:01:33.665355 1161732 fix.go:56] duration metric: took 6.553050716s for fixHost
	I0904 07:01:33.665388 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:33.669609 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.670031 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:33.670076 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.670271 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:33.670479 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:33.670680 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:33.670879 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:33.671044 1161732 main.go:141] libmachine: Using SSH client type: native
	I0904 07:01:33.671293 1161732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840660] 0x843360 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0904 07:01:33.671311 1161732 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0904 07:01:33.787999 1161732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756969293.783889209
	
	I0904 07:01:33.788029 1161732 fix.go:216] guest clock: 1756969293.783889209
	I0904 07:01:33.788040 1161732 fix.go:229] Guest: 2025-09-04 07:01:33.783889209 +0000 UTC Remote: 2025-09-04 07:01:33.665366067 +0000 UTC m=+23.813013966 (delta=118.523142ms)
	I0904 07:01:33.788068 1161732 fix.go:200] guest clock delta is within tolerance: 118.523142ms
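fix.go above reads the guest clock over SSH (the `date +%s.%N` run), compares it against the host-side timestamp, and accepts the drift when the delta is under a tolerance. A minimal Go sketch of that comparison, using the values from the log lines above and a hypothetical one-second tolerance (minikube's real check lives in fix.go):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values taken from the log lines above.
    	guest := time.Unix(1756969293, 783889209)                     // date +%s.%N reported by the VM
    	host := time.Date(2025, 9, 4, 7, 1, 33, 665366067, time.UTC) // host-side reference timestamp
    	const tolerance = time.Second                                 // hypothetical tolerance value

    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	if delta <= tolerance {
    		fmt.Printf("guest clock delta is within tolerance: %v\n", delta) // prints 118.523142ms
    	}
    }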
	I0904 07:01:33.788076 1161732 start.go:83] releasing machines lock for "pause-017566", held for 6.675805339s
	I0904 07:01:33.788102 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:33.788408 1161732 main.go:141] libmachine: (pause-017566) Calling .GetIP
	I0904 07:01:33.791521 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.791914 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:33.791977 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.792095 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:33.792611 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:33.792808 1161732 main.go:141] libmachine: (pause-017566) Calling .DriverName
	I0904 07:01:33.792932 1161732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 07:01:33.792992 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:33.793044 1161732 ssh_runner.go:195] Run: cat /version.json
	I0904 07:01:33.793087 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHHostname
	I0904 07:01:33.795985 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.796378 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:33.796407 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.796428 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.796674 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:33.796854 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:33.796939 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:33.796976 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:33.797029 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:33.797123 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHPort
	I0904 07:01:33.797170 1161732 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/pause-017566/id_rsa Username:docker}
	I0904 07:01:33.797245 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHKeyPath
	I0904 07:01:33.797390 1161732 main.go:141] libmachine: (pause-017566) Calling .GetSSHUsername
	I0904 07:01:33.797564 1161732 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/pause-017566/id_rsa Username:docker}
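The two Run calls above (`curl -sS -m 2 https://registry.k8s.io/` and `cat /version.json`) are issued back to back, each with its own SSH client, so they can proceed concurrently. A minimal sketch of that fan-out pattern with golang.org/x/sync/errgroup; the `run` helper below is a hypothetical stand-in for ssh_runner and executes the commands locally rather than over SSH:

    package main

    import (
    	"fmt"
    	"os/exec"

    	"golang.org/x/sync/errgroup"
    )

    // run is a hypothetical stand-in for minikube's ssh_runner: it executes
    // the command locally instead of over SSH and prints its combined output.
    func run(name string, args ...string) error {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	fmt.Printf("%s: %s\n", name, out)
    	return err
    }

    func main() {
    	var g errgroup.Group
    	g.Go(func() error { return run("curl", "-sS", "-m", "2", "https://registry.k8s.io/") })
    	g.Go(func() error { return run("cat", "/version.json") })
    	if err := g.Wait(); err != nil {
    		fmt.Println("one of the probes failed:", err)
    	}
    }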
	I0904 07:01:33.916461 1161732 ssh_runner.go:195] Run: systemctl --version
	I0904 07:01:33.922526 1161732 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0904 07:01:34.076454 1161732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0904 07:01:34.087525 1161732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0904 07:01:34.087620 1161732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 07:01:34.098978 1161732 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0904 07:01:34.099005 1161732 start.go:495] detecting cgroup driver to use...
	I0904 07:01:34.099086 1161732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 07:01:34.120306 1161732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 07:01:34.137553 1161732 docker.go:218] disabling cri-docker service (if available) ...
	I0904 07:01:34.137664 1161732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0904 07:01:34.154114 1161732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0904 07:01:34.169285 1161732 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0904 07:01:34.345407 1161732 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0904 07:01:34.520424 1161732 docker.go:234] disabling docker service ...
	I0904 07:01:34.520502 1161732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0904 07:01:34.550550 1161732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0904 07:01:34.565558 1161732 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0904 07:01:34.746021 1161732 ssh_runner.go:195] Run: sudo systemctl mask docker.service
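docker.go:218 and docker.go:234 above retire the competing runtimes by stopping, disabling, and masking their systemd units; masking matters because it also prevents socket activation from bringing the service back. Collected from the log, the equivalent sequence run by hand would be:

    sudo systemctl stop -f cri-docker.socket
    sudo systemctl stop -f cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket
    sudo systemctl stop -f docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service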
	W0904 07:01:30.483336 1161036 pod_ready.go:104] pod "coredns-66bc5c9577-lq225" is not "Ready", error: <nil>
	W0904 07:01:32.982861 1161036 pod_ready.go:104] pod "coredns-66bc5c9577-lq225" is not "Ready", error: <nil>
	I0904 07:01:33.667105 1161522 out.go:252]   - Generating certificates and keys ...
	I0904 07:01:33.667204 1161522 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0904 07:01:33.667291 1161522 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0904 07:01:34.195279 1161522 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0904 07:01:34.339684 1161522 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0904 07:01:34.516257 1161522 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0904 07:01:34.542907 1161522 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0904 07:01:34.820712 1161522 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0904 07:01:34.821029 1161522 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-644084 localhost] and IPs [192.168.83.184 127.0.0.1 ::1]
	I0904 07:01:34.936222 1161522 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0904 07:01:34.936602 1161522 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-644084 localhost] and IPs [192.168.83.184 127.0.0.1 ::1]
	I0904 07:01:35.189688 1161522 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0904 07:01:35.868146 1161522 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0904 07:01:35.933872 1161522 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0904 07:01:35.934204 1161522 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0904 07:01:36.288723 1161522 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0904 07:01:36.560107 1161522 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0904 07:01:36.660548 1161522 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0904 07:01:37.046981 1161522 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0904 07:01:37.442478 1161522 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0904 07:01:37.442906 1161522 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0904 07:01:37.446364 1161522 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0904 07:01:37.448049 1161522 out.go:252]   - Booting up control plane ...
	I0904 07:01:37.448193 1161522 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0904 07:01:37.448299 1161522 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0904 07:01:37.448394 1161522 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0904 07:01:37.473928 1161522 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0904 07:01:37.474061 1161522 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0904 07:01:37.481715 1161522 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0904 07:01:37.482086 1161522 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0904 07:01:37.482267 1161522 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0904 07:01:37.662999 1161522 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0904 07:01:37.663220 1161522 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0904 07:01:34.918646 1161732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 07:01:34.936473 1161732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 07:01:34.964184 1161732 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0904 07:01:34.964265 1161732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:34.976814 1161732 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0904 07:01:34.976888 1161732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:34.989396 1161732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:35.002104 1161732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:35.014978 1161732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 07:01:35.027454 1161732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:35.044316 1161732 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0904 07:01:35.058383 1161732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
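After the sed edits above (crio.go:59 and crio.go:70), /etc/crio/crio.conf.d/02-crio.conf should contain, in relevant part, roughly the fragment below; this is a reconstruction from the commands shown, assuming nothing else has rewritten the file:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]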
	I0904 07:01:35.070619 1161732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 07:01:35.081214 1161732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 07:01:35.096031 1161732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 07:01:35.271583 1161732 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0904 07:01:39.545989 1161732 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.274359891s)
	I0904 07:01:39.546026 1161732 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0904 07:01:39.546098 1161732 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0904 07:01:39.551592 1161732 start.go:563] Will wait 60s for crictl version
	I0904 07:01:39.551658 1161732 ssh_runner.go:195] Run: which crictl
	I0904 07:01:39.555911 1161732 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 07:01:39.593817 1161732 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0904 07:01:39.593911 1161732 ssh_runner.go:195] Run: crio --version
	I0904 07:01:39.623039 1161732 ssh_runner.go:195] Run: crio --version
	I0904 07:01:39.661659 1161732 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0904 07:01:39.662705 1161732 main.go:141] libmachine: (pause-017566) Calling .GetIP
	I0904 07:01:39.666104 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:39.666530 1161732 main.go:141] libmachine: (pause-017566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:68:c3", ip: ""} in network mk-pause-017566: {Iface:virbr1 ExpiryTime:2025-09-04 08:00:00 +0000 UTC Type:0 Mac:52:54:00:05:68:c3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:pause-017566 Clientid:01:52:54:00:05:68:c3}
	I0904 07:01:39.666563 1161732 main.go:141] libmachine: (pause-017566) DBG | domain pause-017566 has defined IP address 192.168.39.168 and MAC address 52:54:00:05:68:c3 in network mk-pause-017566
	I0904 07:01:39.666943 1161732 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0904 07:01:39.672719 1161732 kubeadm.go:875] updating cluster {Name:pause-017566 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-017566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 07:01:39.672897 1161732 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 07:01:39.672947 1161732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 07:01:39.714651 1161732 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 07:01:39.714676 1161732 crio.go:433] Images already preloaded, skipping extraction
	I0904 07:01:39.714749 1161732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0904 07:01:39.751978 1161732 crio.go:514] all images are preloaded for cri-o runtime.
	I0904 07:01:39.752004 1161732 cache_images.go:85] Images are preloaded, skipping loading
	I0904 07:01:39.752012 1161732 kubeadm.go:926] updating node { 192.168.39.168 8443 v1.34.0 crio true true} ...
	I0904 07:01:39.752114 1161732 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-017566 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:pause-017566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 07:01:39.752179 1161732 ssh_runner.go:195] Run: crio config
	I0904 07:01:39.795416 1161732 cni.go:84] Creating CNI manager for ""
	I0904 07:01:39.795443 1161732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 07:01:39.795458 1161732 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 07:01:39.795500 1161732 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.168 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-017566 NodeName:pause-017566 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 07:01:39.795668 1161732 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-017566"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.168"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.168"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
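kubeadm.go:195 above emits one file containing four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), later copied to the node as /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch that enumerates those documents with gopkg.in/yaml.v3, assuming a local copy saved as kubeadm.yaml (hypothetical filename):

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config above
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		err := dec.Decode(&doc)
    		if errors.Is(err, io.EOF) {
    			break
    		}
    		if err != nil {
    			panic(err)
    		}
    		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
    	}
    }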
	
	I0904 07:01:39.795740 1161732 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 07:01:39.807142 1161732 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 07:01:39.807227 1161732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 07:01:39.818028 1161732 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0904 07:01:39.841592 1161732 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 07:01:39.863014 1161732 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I0904 07:01:39.882663 1161732 ssh_runner.go:195] Run: grep 192.168.39.168	control-plane.minikube.internal$ /etc/hosts
	I0904 07:01:39.886632 1161732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W0904 07:01:35.217339 1161036 pod_ready.go:104] pod "coredns-66bc5c9577-lq225" is not "Ready", error: <nil>
	W0904 07:01:37.482464 1161036 pod_ready.go:104] pod "coredns-66bc5c9577-lq225" is not "Ready", error: <nil>
	W0904 07:01:39.484678 1161036 pod_ready.go:104] pod "coredns-66bc5c9577-lq225" is not "Ready", error: <nil>
	I0904 07:01:39.662705 1161522 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.001321555s
	I0904 07:01:39.666939 1161522 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0904 07:01:39.667063 1161522 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.83.184:8443/livez
	I0904 07:01:39.667180 1161522 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0904 07:01:39.667284 1161522 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0904 07:01:42.224101 1161522 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.557434475s
	I0904 07:01:43.264683 1161522 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.597296057s
	I0904 07:01:40.059102 1161732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 07:01:40.075459 1161732 certs.go:68] Setting up /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566 for IP: 192.168.39.168
	I0904 07:01:40.075502 1161732 certs.go:194] generating shared ca certs ...
	I0904 07:01:40.075538 1161732 certs.go:226] acquiring lock for ca certs: {Name:mkb48abb711128619cd278e65e40c326a6b20d89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 07:01:40.075768 1161732 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.key
	I0904 07:01:40.075842 1161732 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/proxy-client-ca.key
	I0904 07:01:40.075862 1161732 certs.go:256] generating profile certs ...
	I0904 07:01:40.075981 1161732 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566/client.key
	I0904 07:01:40.076067 1161732 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566/apiserver.key.46bf764b
	I0904 07:01:40.076144 1161732 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566/proxy-client.key
	I0904 07:01:40.076287 1161732 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/1120074.pem (1338 bytes)
	W0904 07:01:40.076327 1161732 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/1120074_empty.pem, impossibly tiny 0 bytes
	I0904 07:01:40.076340 1161732 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca-key.pem (1675 bytes)
	I0904 07:01:40.076373 1161732 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/ca.pem (1082 bytes)
	I0904 07:01:40.076404 1161732 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/cert.pem (1123 bytes)
	I0904 07:01:40.076436 1161732 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/key.pem (1679 bytes)
	I0904 07:01:40.076497 1161732 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-1115845/.minikube/files/etc/ssl/certs/11200742.pem (1708 bytes)
	I0904 07:01:40.077172 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 07:01:40.108154 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0904 07:01:40.136983 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 07:01:40.167004 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0904 07:01:40.199411 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0904 07:01:40.229354 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0904 07:01:40.263364 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 07:01:40.294718 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/pause-017566/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0904 07:01:40.329466 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/files/etc/ssl/certs/11200742.pem --> /usr/share/ca-certificates/11200742.pem (1708 bytes)
	I0904 07:01:40.363576 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 07:01:40.396318 1161732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-1115845/.minikube/certs/1120074.pem --> /usr/share/ca-certificates/1120074.pem (1338 bytes)
	I0904 07:01:40.430931 1161732 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 07:01:40.452998 1161732 ssh_runner.go:195] Run: openssl version
	I0904 07:01:40.461063 1161732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11200742.pem && ln -fs /usr/share/ca-certificates/11200742.pem /etc/ssl/certs/11200742.pem"
	I0904 07:01:40.477331 1161732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11200742.pem
	I0904 07:01:40.492886 1161732 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  4 06:04 /usr/share/ca-certificates/11200742.pem
	I0904 07:01:40.493057 1161732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11200742.pem
	I0904 07:01:40.508368 1161732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11200742.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 07:01:40.573215 1161732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 07:01:40.592349 1161732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 07:01:40.603505 1161732 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 05:54 /usr/share/ca-certificates/minikubeCA.pem
	I0904 07:01:40.603580 1161732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 07:01:40.621795 1161732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 07:01:40.656205 1161732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1120074.pem && ln -fs /usr/share/ca-certificates/1120074.pem /etc/ssl/certs/1120074.pem"
	I0904 07:01:40.689203 1161732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1120074.pem
	I0904 07:01:40.700628 1161732 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  4 06:04 /usr/share/ca-certificates/1120074.pem
	I0904 07:01:40.700733 1161732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1120074.pem
	I0904 07:01:40.718305 1161732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1120074.pem /etc/ssl/certs/51391683.0"
	I0904 07:01:40.748388 1161732 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 07:01:40.764024 1161732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0904 07:01:40.790149 1161732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0904 07:01:40.806535 1161732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0904 07:01:40.822778 1161732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0904 07:01:40.836036 1161732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0904 07:01:40.848094 1161732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
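Each `openssl x509 -checkend 86400` above exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, which is how the restart decides it can reuse the existing certs. The same test in Go with crypto/x509, a minimal sketch against a hypothetical local PEM file:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// Hypothetical path; the log checks several certs under /var/lib/minikube/certs.
    	data, err := os.ReadFile("apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Mirrors `openssl x509 -checkend 86400`: fail if the cert expires
    	// within the next 86400 seconds.
    	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 86400s")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid beyond the checkend window")
    }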
	I0904 07:01:40.861650 1161732 kubeadm.go:392] StartCluster: {Name:pause-017566 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-017566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 07:01:40.861903 1161732 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0904 07:01:40.862007 1161732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0904 07:01:40.952656 1161732 cri.go:89] found id: "7bd228eee0c8478996d5e834f0e01320ec10565c851fb545d08f599c036f664e"
	I0904 07:01:40.952687 1161732 cri.go:89] found id: "bb4a7e0352be4102c6ffc78172d580c052dba2d2803d939ac1ad23e45e8677ca"
	I0904 07:01:40.952692 1161732 cri.go:89] found id: "0b029332740d46dc6f0939ada2079b4939254cb16a68486524aa04a27a2b6bcf"
	I0904 07:01:40.952697 1161732 cri.go:89] found id: "b880e684a6e0d5818a2df4915f902ea1940a2b8fab778c808806680aa4d82037"
	I0904 07:01:40.952702 1161732 cri.go:89] found id: "143324528cf349785e87b806fa537a8990761956d653c2efad7cbd0eba68feb9"
	I0904 07:01:40.952707 1161732 cri.go:89] found id: "6f3f77c12db6e0e60d13e8d3c64818d2d235cc405b125f184aa5dc00f939cd6a"
	I0904 07:01:40.952711 1161732 cri.go:89] found id: ""
	I0904 07:01:40.952765 1161732 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-017566 -n pause-017566
helpers_test.go:269: (dbg) Run:  kubectl --context pause-017566 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (56.32s)

                                                
                                    

Test pass (279/323)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 23.1
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.14
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.0/json-events 13.07
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.1
18 TestDownloadOnly/v1.34.0/DeleteAll 0.14
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.64
22 TestOffline 56.87
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 199.22
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 10.49
35 TestAddons/parallel/Registry 18.32
36 TestAddons/parallel/RegistryCreds 0.78
38 TestAddons/parallel/InspektorGadget 5.29
39 TestAddons/parallel/MetricsServer 6.27
41 TestAddons/parallel/CSI 60.16
42 TestAddons/parallel/Headlamp 18.96
43 TestAddons/parallel/CloudSpanner 6.95
44 TestAddons/parallel/LocalPath 16.46
45 TestAddons/parallel/NvidiaDevicePlugin 6.72
46 TestAddons/parallel/Yakd 11.3
48 TestAddons/StoppedEnableDisable 91.27
49 TestCertOptions 62.33
50 TestCertExpiration 278.82
52 TestForceSystemdFlag 87.31
53 TestForceSystemdEnv 77.27
55 TestKVMDriverInstallOrUpdate 1.91
59 TestErrorSpam/setup 42.93
60 TestErrorSpam/start 0.35
61 TestErrorSpam/status 0.8
62 TestErrorSpam/pause 1.69
63 TestErrorSpam/unpause 1.84
64 TestErrorSpam/stop 94.3
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 87.99
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 36.92
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.38
76 TestFunctional/serial/CacheCmd/cache/add_local 2.11
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.69
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 38.99
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.47
87 TestFunctional/serial/LogsFileCmd 1.42
88 TestFunctional/serial/InvalidService 4.25
90 TestFunctional/parallel/ConfigCmd 0.36
91 TestFunctional/parallel/DashboardCmd 13.41
92 TestFunctional/parallel/DryRun 0.33
93 TestFunctional/parallel/InternationalLanguage 0.17
94 TestFunctional/parallel/StatusCmd 0.93
98 TestFunctional/parallel/ServiceCmdConnect 17.49
99 TestFunctional/parallel/AddonsCmd 0.14
100 TestFunctional/parallel/PersistentVolumeClaim 39.39
102 TestFunctional/parallel/SSHCmd 0.43
103 TestFunctional/parallel/CpCmd 1.33
104 TestFunctional/parallel/MySQL 22.5
105 TestFunctional/parallel/FileSync 0.22
106 TestFunctional/parallel/CertSync 1.35
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.48
114 TestFunctional/parallel/License 0.28
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
129 TestFunctional/parallel/ImageCommands/ImageBuild 3.71
130 TestFunctional/parallel/ImageCommands/Setup 1.76
131 TestFunctional/parallel/ProfileCmd/profile_list 0.37
132 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
133 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
134 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
136 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.45
137 TestFunctional/parallel/MountCmd/any-port 17.46
138 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.23
139 TestFunctional/parallel/Version/short 0.05
140 TestFunctional/parallel/Version/components 0.45
141 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.24
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 6.82
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.12
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
146 TestFunctional/parallel/MountCmd/specific-port 1.74
147 TestFunctional/parallel/MountCmd/VerifyCleanup 1.33
148 TestFunctional/parallel/ServiceCmd/DeployApp 11.42
149 TestFunctional/parallel/ServiceCmd/List 1.3
150 TestFunctional/parallel/ServiceCmd/JSONOutput 1.25
151 TestFunctional/parallel/ServiceCmd/HTTPS 0.29
152 TestFunctional/parallel/ServiceCmd/Format 0.32
153 TestFunctional/parallel/ServiceCmd/URL 0.34
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 250.44
162 TestMultiControlPlane/serial/DeployApp 6.92
163 TestMultiControlPlane/serial/PingHostFromPods 1.21
164 TestMultiControlPlane/serial/AddWorkerNode 52.67
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.9
167 TestMultiControlPlane/serial/CopyFile 13.54
168 TestMultiControlPlane/serial/StopSecondaryNode 91.68
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.68
170 TestMultiControlPlane/serial/RestartSecondaryNode 32.49
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.09
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 409.84
173 TestMultiControlPlane/serial/DeleteSecondaryNode 18.43
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
175 TestMultiControlPlane/serial/StopCluster 272.75
176 TestMultiControlPlane/serial/RestartCluster 108.57
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.63
178 TestMultiControlPlane/serial/AddSecondaryNode 77.14
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.92
183 TestJSONOutput/start/Command 92.86
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.75
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.69
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.34
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.2
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 94.54
215 TestMountStart/serial/StartWithMountFirst 27.89
216 TestMountStart/serial/VerifyMountFirst 0.39
217 TestMountStart/serial/StartWithMountSecond 29.03
218 TestMountStart/serial/VerifyMountSecond 0.38
219 TestMountStart/serial/DeleteFirst 1.02
220 TestMountStart/serial/VerifyMountPostDelete 0.39
221 TestMountStart/serial/Stop 1.7
222 TestMountStart/serial/RestartStopped 23.7
223 TestMountStart/serial/VerifyMountPostStop 0.38
226 TestMultiNode/serial/FreshStart2Nodes 111.94
227 TestMultiNode/serial/DeployApp2Nodes 5.8
228 TestMultiNode/serial/PingHostFrom2Pods 0.78
229 TestMultiNode/serial/AddNode 50.29
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.6
232 TestMultiNode/serial/CopyFile 7.41
233 TestMultiNode/serial/StopNode 3.16
234 TestMultiNode/serial/StartAfterStop 38.86
235 TestMultiNode/serial/RestartKeepsNodes 318.99
236 TestMultiNode/serial/DeleteNode 2.83
237 TestMultiNode/serial/StopMultiNode 181.87
238 TestMultiNode/serial/RestartMultiNode 103.1
239 TestMultiNode/serial/ValidateNameConflict 46.58
246 TestScheduledStopUnix 115.17
250 TestRunningBinaryUpgrade 157.03
252 TestKubernetesUpgrade 344.27
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
256 TestNoKubernetes/serial/StartWithK8s 123.73
264 TestNetworkPlugins/group/false 3.42
268 TestStoppedBinaryUpgrade/Setup 2.58
269 TestStoppedBinaryUpgrade/Upgrade 135.42
270 TestNoKubernetes/serial/StartWithStopK8s 59.75
271 TestNoKubernetes/serial/Start 49.71
272 TestStoppedBinaryUpgrade/MinikubeLogs 1.15
273 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
274 TestNoKubernetes/serial/ProfileList 0.85
275 TestNoKubernetes/serial/Stop 1.32
285 TestPause/serial/Start 84.85
286 TestNetworkPlugins/group/auto/Start 94
287 TestNetworkPlugins/group/kindnet/Start 68.17
289 TestNetworkPlugins/group/auto/KubeletFlags 0.22
290 TestNetworkPlugins/group/auto/NetCatPod 10.24
291 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
292 TestNetworkPlugins/group/calico/Start 77.24
293 TestNetworkPlugins/group/auto/DNS 0.15
294 TestNetworkPlugins/group/auto/Localhost 0.12
295 TestNetworkPlugins/group/auto/HairPin 0.14
296 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
297 TestNetworkPlugins/group/kindnet/NetCatPod 11.23
298 TestNetworkPlugins/group/kindnet/DNS 0.16
299 TestNetworkPlugins/group/kindnet/Localhost 0.13
300 TestNetworkPlugins/group/kindnet/HairPin 0.14
301 TestNetworkPlugins/group/custom-flannel/Start 92.65
302 TestNetworkPlugins/group/enable-default-cni/Start 88.65
303 TestNetworkPlugins/group/flannel/Start 122.93
304 TestNetworkPlugins/group/calico/ControllerPod 6.01
305 TestNetworkPlugins/group/calico/KubeletFlags 0.24
306 TestNetworkPlugins/group/calico/NetCatPod 14.24
307 TestNetworkPlugins/group/calico/DNS 0.16
308 TestNetworkPlugins/group/calico/Localhost 0.15
309 TestNetworkPlugins/group/calico/HairPin 0.13
310 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
311 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.37
312 TestNetworkPlugins/group/bridge/Start 89.74
313 TestNetworkPlugins/group/custom-flannel/DNS 0.14
314 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
315 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
316 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
317 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.38
318 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
319 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
320 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
322 TestStartStop/group/old-k8s-version/serial/FirstStart 103.76
324 TestStartStop/group/no-preload/serial/FirstStart 116.24
325 TestNetworkPlugins/group/flannel/ControllerPod 6.01
326 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
327 TestNetworkPlugins/group/flannel/NetCatPod 11.25
328 TestNetworkPlugins/group/flannel/DNS 0.14
329 TestNetworkPlugins/group/flannel/Localhost 0.18
330 TestNetworkPlugins/group/flannel/HairPin 0.12
331 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
332 TestNetworkPlugins/group/bridge/NetCatPod 11.26
334 TestStartStop/group/embed-certs/serial/FirstStart 88.28
335 TestNetworkPlugins/group/bridge/DNS 0.15
336 TestNetworkPlugins/group/bridge/Localhost 0.13
337 TestNetworkPlugins/group/bridge/HairPin 0.14
339 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 95.2
340 TestStartStop/group/old-k8s-version/serial/DeployApp 12.39
341 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.17
342 TestStartStop/group/old-k8s-version/serial/Stop 91.07
343 TestStartStop/group/no-preload/serial/DeployApp 12.29
344 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.03
345 TestStartStop/group/no-preload/serial/Stop 90.84
346 TestStartStop/group/embed-certs/serial/DeployApp 10.29
347 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.93
348 TestStartStop/group/embed-certs/serial/Stop 91.21
349 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.25
350 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.91
351 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.65
352 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
353 TestStartStop/group/old-k8s-version/serial/SecondStart 47.33
354 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
355 TestStartStop/group/no-preload/serial/SecondStart 60.99
356 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 17.01
357 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
358 TestStartStop/group/embed-certs/serial/SecondStart 50.34
359 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
360 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
361 TestStartStop/group/old-k8s-version/serial/Pause 3.18
363 TestStartStop/group/newest-cni/serial/FirstStart 50.53
364 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.28
365 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 67.83
366 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 7.01
367 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.07
368 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
369 TestStartStop/group/no-preload/serial/Pause 3.62
370 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
371 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
372 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
373 TestStartStop/group/embed-certs/serial/Pause 3.17
374 TestStartStop/group/newest-cni/serial/DeployApp 0
375 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.37
376 TestStartStop/group/newest-cni/serial/Stop 11.35
377 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
378 TestStartStop/group/newest-cni/serial/SecondStart 36.68
379 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
380 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
381 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
382 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.76
383 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
384 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
385 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
386 TestStartStop/group/newest-cni/serial/Pause 2.45
x
+
TestDownloadOnly/v1.28.0/json-events (23.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-925513 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-925513 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (23.09575137s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (23.10s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0904 05:53:29.797135 1120074 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0904 05:53:29.797255 1120074 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21409-1115845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-925513
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-925513: exit status 85 (62.326188ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-925513 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-925513 │ jenkins │ v1.36.0 │ 04 Sep 25 05:53 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 05:53:06
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 05:53:06.746360 1120086 out.go:360] Setting OutFile to fd 1 ...
	I0904 05:53:06.746678 1120086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 05:53:06.746690 1120086 out.go:374] Setting ErrFile to fd 2...
	I0904 05:53:06.746695 1120086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 05:53:06.746929 1120086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1115845/.minikube/bin
	W0904 05:53:06.747084 1120086 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21409-1115845/.minikube/config/config.json: open /home/jenkins/minikube-integration/21409-1115845/.minikube/config/config.json: no such file or directory
	I0904 05:53:06.747734 1120086 out.go:368] Setting JSON to true
	I0904 05:53:06.748861 1120086 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":12930,"bootTime":1756952257,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 05:53:06.748965 1120086 start.go:140] virtualization: kvm guest
	I0904 05:53:06.751115 1120086 out.go:99] [download-only-925513] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	W0904 05:53:06.751234 1120086 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21409-1115845/.minikube/cache/preloaded-tarball: no such file or directory
	I0904 05:53:06.751294 1120086 notify.go:220] Checking for updates...
	I0904 05:53:06.752483 1120086 out.go:171] MINIKUBE_LOCATION=21409
	I0904 05:53:06.753775 1120086 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 05:53:06.754992 1120086 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21409-1115845/kubeconfig
	I0904 05:53:06.756023 1120086 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1115845/.minikube
	I0904 05:53:06.757054 1120086 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0904 05:53:06.758944 1120086 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0904 05:53:06.759230 1120086 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 05:53:06.791162 1120086 out.go:99] Using the kvm2 driver based on user configuration
	I0904 05:53:06.791195 1120086 start.go:304] selected driver: kvm2
	I0904 05:53:06.791202 1120086 start.go:918] validating driver "kvm2" against <nil>
	I0904 05:53:06.791521 1120086 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 05:53:06.791596 1120086 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1115845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	W0904 05:53:06.796274 1120086 install.go:62] docker-machine-driver-kvm2: exit status 1
	I0904 05:53:06.797474 1120086 out.go:99] Downloading driver docker-machine-driver-kvm2:
	I0904 05:53:06.797569 1120086 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.36.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.36.0/docker-machine-driver-kvm2-amd64.sha256 -> /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 05:53:07.552356 1120086 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 05:53:07.552998 1120086 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32089MB, container=0MB
	I0904 05:53:07.553148 1120086 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0904 05:53:07.553180 1120086 cni.go:84] Creating CNI manager for ""
	I0904 05:53:07.553227 1120086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 05:53:07.553237 1120086 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0904 05:53:07.553295 1120086 start.go:348] cluster config:
	{Name:download-only-925513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-925513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 05:53:07.553459 1120086 iso.go:125] acquiring lock: {Name:mk8046b526ef8e07e7f8bc343ab464442f664799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 05:53:07.555466 1120086 out.go:99] Downloading VM boot image ...
	I0904 05:53:07.555501 1120086 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21409-1115845/.minikube/cache/iso/amd64/minikube-v1.36.0-1756846819-21409-amd64.iso
	I0904 05:53:17.262807 1120086 out.go:99] Starting "download-only-925513" primary control-plane node in "download-only-925513" cluster
	I0904 05:53:17.262847 1120086 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0904 05:53:17.362866 1120086 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0904 05:53:17.362902 1120086 cache.go:58] Caching tarball of preloaded images
	I0904 05:53:17.363087 1120086 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0904 05:53:17.364756 1120086 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0904 05:53:17.364772 1120086 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0904 05:53:17.466674 1120086 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21409-1115845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-925513 host does not exist
	  To start a cluster, run: "minikube start -p download-only-925513"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
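The preload fetch in the log above carries a checksum= query (md5:72bc7f8573f574c02d8c9a9b3496176b) so the tarball can be verified before it is trusted. A minimal sketch of that verify-while-downloading step in Go, with a placeholder URL and destination path (this is not minikube's actual pkg/minikube/download code, which also handles progress, retries, and locking):

	// checksum_download.go: fetch a file over HTTP and verify it against an
	// expected MD5 hex digest, mirroring the checksum=md5:... convention in
	// the download URLs above. Sketch only; the URL and path are placeholders.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func fetchWithMD5(url, dst, wantHex string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("bad response code: %d", resp.StatusCode)
		}
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		// Hash the stream while writing it to disk, then compare digests.
		h := md5.New()
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
			return fmt.Errorf("invalid checksum: got %s, want %s", got, wantHex)
		}
		return nil
	}

	func main() {
		if err := fetchWithMD5("https://example.com/preload.tar.lz4",
			"/tmp/preload.tar.lz4", "72bc7f8573f574c02d8c9a9b3496176b"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}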
TestDownloadOnly/v1.28.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.14s)
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-925513
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)
TestDownloadOnly/v1.34.0/json-events (13.07s)

=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-515248 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-515248 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.071683881s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (13.07s)
TestDownloadOnly/v1.34.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0904 05:53:43.204617 1120074 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0904 05:53:43.204666 1120074 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21409-1115845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)
TestDownloadOnly/v1.34.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-515248
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-515248: exit status 85 (98.327711ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-925513 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-925513 │ jenkins │ v1.36.0 │ 04 Sep 25 05:53 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.36.0 │ 04 Sep 25 05:53 UTC │ 04 Sep 25 05:53 UTC │
	│ delete  │ -p download-only-925513                                                                                                                                                 │ download-only-925513 │ jenkins │ v1.36.0 │ 04 Sep 25 05:53 UTC │ 04 Sep 25 05:53 UTC │
	│ start   │ -o=json --download-only -p download-only-515248 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-515248 │ jenkins │ v1.36.0 │ 04 Sep 25 05:53 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 05:53:30
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 05:53:30.174457 1120315 out.go:360] Setting OutFile to fd 1 ...
	I0904 05:53:30.174759 1120315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 05:53:30.174770 1120315 out.go:374] Setting ErrFile to fd 2...
	I0904 05:53:30.174777 1120315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 05:53:30.175015 1120315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1115845/.minikube/bin
	I0904 05:53:30.175633 1120315 out.go:368] Setting JSON to true
	I0904 05:53:30.176629 1120315 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":12953,"bootTime":1756952257,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 05:53:30.176738 1120315 start.go:140] virtualization: kvm guest
	I0904 05:53:30.178575 1120315 out.go:99] [download-only-515248] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 05:53:30.178735 1120315 notify.go:220] Checking for updates...
	I0904 05:53:30.180057 1120315 out.go:171] MINIKUBE_LOCATION=21409
	I0904 05:53:30.181331 1120315 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 05:53:30.182398 1120315 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21409-1115845/kubeconfig
	I0904 05:53:30.183585 1120315 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1115845/.minikube
	I0904 05:53:30.185051 1120315 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0904 05:53:30.187211 1120315 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0904 05:53:30.187430 1120315 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 05:53:30.220119 1120315 out.go:99] Using the kvm2 driver based on user configuration
	I0904 05:53:30.220153 1120315 start.go:304] selected driver: kvm2
	I0904 05:53:30.220159 1120315 start.go:918] validating driver "kvm2" against <nil>
	I0904 05:53:30.220478 1120315 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 05:53:30.220569 1120315 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-1115845/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0904 05:53:30.236034 1120315 install.go:137] /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0904 05:53:30.236096 1120315 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0904 05:53:30.236633 1120315 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32089MB, container=0MB
	I0904 05:53:30.236783 1120315 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0904 05:53:30.236812 1120315 cni.go:84] Creating CNI manager for ""
	I0904 05:53:30.236882 1120315 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0904 05:53:30.236892 1120315 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0904 05:53:30.236948 1120315 start.go:348] cluster config:
	{Name:download-only-515248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-515248 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 05:53:30.237032 1120315 iso.go:125] acquiring lock: {Name:mk8046b526ef8e07e7f8bc343ab464442f664799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 05:53:30.238459 1120315 out.go:99] Starting "download-only-515248" primary control-plane node in "download-only-515248" cluster
	I0904 05:53:30.238473 1120315 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 05:53:30.674123 1120315 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0904 05:53:30.674152 1120315 cache.go:58] Caching tarball of preloaded images
	I0904 05:53:30.674300 1120315 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0904 05:53:30.676077 1120315 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0904 05:53:30.676093 1120315 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0904 05:53:31.146185 1120315 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2ff28357f4fb6607eaee8f503f8804cd -> /home/jenkins/minikube-integration/21409-1115845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-515248 host does not exist
	  To start a cluster, run: "minikube start -p download-only-515248"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.10s)
TestDownloadOnly/v1.34.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.14s)
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-515248
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)
TestBinaryMirror (0.64s)

=== RUN   TestBinaryMirror
I0904 05:53:43.852517 1120074 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-795770 --alsologtostderr --binary-mirror http://127.0.0.1:37793 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-795770" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-795770
--- PASS: TestBinaryMirror (0.64s)
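TestBinaryMirror points minikube at http://127.0.0.1:37793 instead of dl.k8s.io. Conceptually, a mirror only needs to serve the same release paths over HTTP; a hedged sketch of such a server, assuming a local ./mirror directory laid out like dl.k8s.io (the directory name is an invention here, not the test's actual harness):

	// binary_mirror.go: serve a local directory as a Kubernetes binary
	// mirror, so that ./mirror/release/v1.34.0/bin/linux/amd64/kubectl maps
	// to http://127.0.0.1:37793/release/v1.34.0/bin/linux/amd64/kubectl.
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		log.Fatal(http.ListenAndServe("127.0.0.1:37793",
			http.FileServer(http.Dir("./mirror"))))
	}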
TestOffline (56.87s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-180216 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-180216 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (55.96470381s)
helpers_test.go:175: Cleaning up "offline-crio-180216" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-180216
--- PASS: TestOffline (56.87s)
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-691233
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-691233: exit status 85 (55.275187ms)
-- stdout --
	* Profile "addons-691233" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-691233"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-691233
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-691233: exit status 85 (54.488813ms)
-- stdout --
	* Profile "addons-691233" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-691233"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
TestAddons/Setup (199.22s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-691233 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-691233 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m19.219577906s)
--- PASS: TestAddons/Setup (199.22s)
TestAddons/serial/GCPAuth/Namespaces (0.15s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-691233 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-691233 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)
TestAddons/serial/GCPAuth/FakeCredentials (10.49s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-691233 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-691233 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1bb14d69-8ad2-4f5c-b13c-a5c8433f0de8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1bb14d69-8ad2-4f5c-b13c-a5c8433f0de8] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003682117s
addons_test.go:694: (dbg) Run:  kubectl --context addons-691233 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-691233 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-691233 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.49s)
TestAddons/parallel/Registry (18.32s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 10.320616ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-582kf" [fa33c11f-067f-4e95-aa92-9973bc0df7da] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006921247s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-5tk68" [58650695-54df-4940-a8c6-50e3ad46a596] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004107588s
addons_test.go:392: (dbg) Run:  kubectl --context addons-691233 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-691233 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-691233 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.499997734s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-691233 ip
2025/09/04 05:57:40 [DEBUG] GET http://192.168.39.193:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-691233 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.32s)
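The wget --spider step above is essentially a reachability probe against the registry's in-cluster Service DNS name. An equivalent probe in Go, as a sketch; it has to run inside the cluster (for example from a throwaway pod) for registry.kube-system.svc.cluster.local to resolve:

	// registry_probe.go: HEAD-request the registry Service and print the
	// status, like `wget --spider -S http://registry.kube-system.svc.cluster.local`.
	package main

	import (
		"fmt"
		"net/http"
		"os"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			fmt.Fprintln(os.Stderr, "registry unreachable:", err)
			os.Exit(1)
		}
		resp.Body.Close()
		fmt.Println("registry responded:", resp.Status)
	}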
TestAddons/parallel/RegistryCreds (0.78s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.828675ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-691233
addons_test.go:332: (dbg) Run:  kubectl --context addons-691233 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-691233 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.78s)
TestAddons/parallel/InspektorGadget (5.29s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-8gqn6" [6e19b2fd-f636-4888-8f45-b23455e6f029] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004898339s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-691233 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.29s)
TestAddons/parallel/MetricsServer (6.27s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 10.257584ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-zsdw6" [4d80e196-44de-49cc-a421-dc451907e628] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006140542s
addons_test.go:463: (dbg) Run:  kubectl --context addons-691233 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-691233 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-691233 addons disable metrics-server --alsologtostderr -v=1: (1.181285445s)
--- PASS: TestAddons/parallel/MetricsServer (6.27s)
TestAddons/parallel/CSI (60.16s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0904 05:57:23.100683 1120074 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0904 05:57:23.106642 1120074 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0904 05:57:23.106663 1120074 kapi.go:107] duration metric: took 6.009ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.017918ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-691233 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-691233 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [b4b2b6b8-fac7-41ec-93cf-536624f108bf] Pending
helpers_test.go:352: "task-pv-pod" [b4b2b6b8-fac7-41ec-93cf-536624f108bf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [b4b2b6b8-fac7-41ec-93cf-536624f108bf] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.00391499s
addons_test.go:572: (dbg) Run:  kubectl --context addons-691233 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-691233 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-691233 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-691233 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-691233 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-691233 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-691233 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [ddfc68f9-b817-45f5-acca-d9fcf5cd2b67] Pending
helpers_test.go:352: "task-pv-pod-restore" [ddfc68f9-b817-45f5-acca-d9fcf5cd2b67] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [ddfc68f9-b817-45f5-acca-d9fcf5cd2b67] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004187055s
addons_test.go:614: (dbg) Run:  kubectl --context addons-691233 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-691233 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-691233 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-691233 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-691233 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-691233 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.784726666s)
--- PASS: TestAddons/parallel/CSI (60.16s)
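The long run of identical `kubectl get pvc ... -o jsonpath={.status.phase}` lines above is the test helper's poll loop waiting for the claim to reach Bound. A sketch of the same loop, shelling out to kubectl (the 2s interval and error handling here are assumptions, not the helpers_test.go implementation):

	// pvc_wait.go: poll a PVC's .status.phase until it is Bound or the
	// deadline passes, like the helpers_test.go:402 loop above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
		"time"
	)

	func waitPVCBound(kubeContext, name, ns string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubeContext,
				"get", "pvc", name, "-n", ns,
				"-o", "jsonpath={.status.phase}").Output()
			// Tolerate transient errors; the phase may simply not be set yet.
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s not Bound within %s", ns, name, timeout)
	}

	func main() {
		if err := waitPVCBound("addons-691233", "hpvc", "default", 6*time.Minute); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}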
TestAddons/parallel/Headlamp (18.96s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-691233 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6f46646d79-cjc4z" [d13257cb-c2d8-460b-af34-06338eab31e9] Pending
helpers_test.go:352: "headlamp-6f46646d79-cjc4z" [d13257cb-c2d8-460b-af34-06338eab31e9] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6f46646d79-cjc4z" [d13257cb-c2d8-460b-af34-06338eab31e9] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.005638689s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-691233 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-691233 addons disable headlamp --alsologtostderr -v=1: (6.084703879s)
--- PASS: TestAddons/parallel/Headlamp (18.96s)
TestAddons/parallel/CloudSpanner (6.95s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-c55d4cb6d-qbt8z" [db6f35d8-a24f-4caf-b56b-abfdb6f37409] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.007499791s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-691233 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.95s)
TestAddons/parallel/LocalPath (16.46s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-691233 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-691233 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-691233 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [7fd3ef89-6e61-428f-908e-4095b77f76e6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [7fd3ef89-6e61-428f-908e-4095b77f76e6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [7fd3ef89-6e61-428f-908e-4095b77f76e6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 9.004446642s
addons_test.go:967: (dbg) Run:  kubectl --context addons-691233 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-691233 ssh "cat /opt/local-path-provisioner/pvc-e010504f-7da0-4a3a-8765-f897fccbcf3a_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-691233 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-691233 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-691233 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (16.46s)
TestAddons/parallel/NvidiaDevicePlugin (6.72s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-7zlkd" [d422839b-7e87-41d0-b5fa-45d2eb76881d] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004022729s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-691233 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.72s)
TestAddons/parallel/Yakd (11.3s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-frrhx" [5aebbf38-2858-4653-bac4-d45ce042ca14] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.242994996s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-691233 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-691233 addons disable yakd --alsologtostderr -v=1: (6.052690904s)
--- PASS: TestAddons/parallel/Yakd (11.30s)
TestAddons/StoppedEnableDisable (91.27s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-691233
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-691233: (1m30.977832877s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-691233
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-691233
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-691233
--- PASS: TestAddons/StoppedEnableDisable (91.27s)
TestCertOptions (62.33s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-153188 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-153188 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m0.800406717s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-153188 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-153188 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-153188 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-153188" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-153188
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-153188: (1.044249749s)
--- PASS: TestCertOptions (62.33s)
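TestCertOptions verifies that the extra --apiserver-ips/--apiserver-names values ended up as SANs in apiserver.crt, by inspecting `openssl x509 -text` output. The same check expressed with Go's crypto/x509, as a sketch; the local apiserver.crt path assumes the cert has first been copied out of the VM:

	// san_check.go: parse an apiserver certificate and report whether the
	// SANs requested via --apiserver-ips/--apiserver-names are present.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"net"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // copied from /var/lib/minikube/certs
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block in apiserver.crt")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		ipOK := false
		for _, ip := range cert.IPAddresses {
			if ip.Equal(net.ParseIP("192.168.15.15")) {
				ipOK = true
			}
		}
		nameOK := false
		for _, name := range cert.DNSNames {
			if name == "www.google.com" {
				nameOK = true
			}
		}
		fmt.Printf("IP SAN 192.168.15.15: %v, DNS SAN www.google.com: %v\n", ipOK, nameOK)
	}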
TestCertExpiration (278.82s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-986529 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-986529 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m6.835571262s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-986529 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-986529 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (31.160447941s)
helpers_test.go:175: Cleaning up "cert-expiration-986529" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-986529
--- PASS: TestCertExpiration (278.82s)
TestForceSystemdFlag (87.31s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-969000 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-969000 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m26.232020665s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-969000 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-969000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-969000
--- PASS: TestForceSystemdFlag (87.31s)
TestForceSystemdEnv (77.27s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-199272 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-199272 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m16.211358703s)
helpers_test.go:175: Cleaning up "force-systemd-env-199272" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-199272
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-199272: (1.056937393s)
--- PASS: TestForceSystemdEnv (77.27s)

TestKVMDriverInstallOrUpdate (1.91s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0904 06:55:49.748220 1120074 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0904 06:55:49.748418 1120074 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0904 06:55:49.780267 1120074 install.go:62] docker-machine-driver-kvm2: exit status 1
W0904 06:55:49.780571 1120074 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0904 06:55:49.780661 1120074 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate26335514/001/docker-machine-driver-kvm2
I0904 06:55:50.039176 1120074 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate26335514/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc0006c1870 gz:0xc0006c1878 tar:0xc0006c1820 tar.bz2:0xc0006c1830 tar.gz:0xc0006c1840 tar.xz:0xc0006c1850 tar.zst:0xc0006c1860 tbz2:0xc0006c1830 tgz:0xc0006c1840 txz:0xc0006c1850 tzst:0xc0006c1860 xz:0xc0006c1880 zip:0xc0006c1890 zst:0xc0006c1888] Getters:map[file:0xc0016f48e0 http:0xc0008b4d70 https:0xc0008b4f00] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code
: 404. trying to get the common version
I0904 06:55:50.039223 1120074 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate26335514/001/docker-machine-driver-kvm2
I0904 06:55:51.199076 1120074 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0904 06:55:51.199260 1120074 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0904 06:55:51.231280 1120074 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0904 06:55:51.231312 1120074 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0904 06:55:51.231380 1120074 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0904 06:55:51.231407 1120074 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate26335514/002/docker-machine-driver-kvm2
I0904 06:55:51.260696 1120074 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate26335514/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc0006c1870 gz:0xc0006c1878 tar:0xc0006c1820 tar.bz2:0xc0006c1830 tar.gz:0xc0006c1840 tar.xz:0xc0006c1850 tar.zst:0xc0006c1860 tbz2:0xc0006c1830 tgz:0xc0006c1840 txz:0xc0006c1850 tzst:0xc0006c1860 xz:0xc0006c1880 zip:0xc0006c1890 zst:0xc0006c1888] Getters:map[file:0xc002020f60 http:0xc00205cd20 https:0xc00205cd70] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code
: 404. trying to get the common version
I0904 06:55:51.260740 1120074 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate26335514/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.91s)
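
The two warning pairs above show the driver updater's fallback path: validation finds a missing or out-of-date docker-machine-driver-kvm2 on the PATH, the arch-specific download fails because its checksum file returns 404, and the code retries the unsuffixed "common" URL. A rough sketch of that fallback (URLs copied from the log; the fetch helper is illustrative and skips checksum verification and writing to disk):

package main

import (
	"fmt"
	"net/http"
)

// fetch issues a GET and reports non-200 responses as errors; body
// handling and checksum verification are elided.
func fetch(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"
	// Try the arch-specific asset's checksum first, as install does...
	if err := fetch(base + "-amd64.sha256"); err != nil {
		fmt.Println("failed to download arch specific driver:", err)
		// ...then fall back to the common version, as driver.go:46 logs.
		if err := fetch(base + ".sha256"); err != nil {
			fmt.Println("common version failed too:", err)
		}
	}
}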

TestErrorSpam/setup (42.93s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-654486 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-654486 --driver=kvm2  --container-runtime=crio
E0904 06:02:04.417465 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:02:04.423918 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:02:04.435334 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:02:04.456803 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:02:04.498260 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:02:04.579778 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:02:04.741368 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:02:05.063083 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:02:05.705249 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:02:06.986895 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:02:09.549882 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:02:14.671431 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:02:24.912942 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:02:45.395089 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-654486 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-654486 --driver=kvm2  --container-runtime=crio: (42.926038654s)
--- PASS: TestErrorSpam/setup (42.93s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654486 --log_dir /tmp/nospam-654486 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654486 --log_dir /tmp/nospam-654486 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654486 --log_dir /tmp/nospam-654486 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.8s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654486 --log_dir /tmp/nospam-654486 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654486 --log_dir /tmp/nospam-654486 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654486 --log_dir /tmp/nospam-654486 status
--- PASS: TestErrorSpam/status (0.80s)

TestErrorSpam/pause (1.69s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654486 --log_dir /tmp/nospam-654486 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654486 --log_dir /tmp/nospam-654486 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654486 --log_dir /tmp/nospam-654486 pause
--- PASS: TestErrorSpam/pause (1.69s)

TestErrorSpam/unpause (1.84s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654486 --log_dir /tmp/nospam-654486 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654486 --log_dir /tmp/nospam-654486 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654486 --log_dir /tmp/nospam-654486 unpause
--- PASS: TestErrorSpam/unpause (1.84s)

TestErrorSpam/stop (94.3s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654486 --log_dir /tmp/nospam-654486 stop
E0904 06:03:26.357748 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-654486 --log_dir /tmp/nospam-654486 stop: (1m31.019297278s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654486 --log_dir /tmp/nospam-654486 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-654486 --log_dir /tmp/nospam-654486 stop: (1.830339421s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-654486 --log_dir /tmp/nospam-654486 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-654486 --log_dir /tmp/nospam-654486 stop: (1.454199695s)
--- PASS: TestErrorSpam/stop (94.30s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21409-1115845/.minikube/files/etc/test/nested/copy/1120074/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (87.99s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-968890 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0904 06:04:48.282906 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-968890 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m27.986403006s)
--- PASS: TestFunctional/serial/StartWithProxy (87.99s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.92s)

=== RUN   TestFunctional/serial/SoftStart
I0904 06:05:54.555401 1120074 config.go:182] Loaded profile config "functional-968890": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-968890 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-968890 --alsologtostderr -v=8: (36.916226991s)
functional_test.go:678: soft start took 36.916925378s for "functional-968890" cluster.
I0904 06:06:31.471971 1120074 config.go:182] Loaded profile config "functional-968890": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (36.92s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-968890 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-968890 cache add registry.k8s.io/pause:3.1: (1.079726559s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-968890 cache add registry.k8s.io/pause:3.3: (1.181676735s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-968890 cache add registry.k8s.io/pause:latest: (1.120111327s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.38s)

TestFunctional/serial/CacheCmd/cache/add_local (2.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-968890 /tmp/TestFunctionalserialCacheCmdcacheadd_local642075651/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 cache add minikube-local-cache-test:functional-968890
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-968890 cache add minikube-local-cache-test:functional-968890: (1.804999329s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 cache delete minikube-local-cache-test:functional-968890
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-968890
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.11s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-968890 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (212.883594ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-968890 cache reload: (1.000898611s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)
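
The cache_reload steps above encode a round trip: remove the image inside the node, confirm `crictl inspecti` now fails, run `cache reload`, and confirm the image is back. A simplified stand-in for that assertion sequence (same binary, profile, and commands as the log; the mk wrapper is illustrative, not functional_test.go's helper):

package main

import (
	"log"
	"os/exec"
)

// mk runs the minikube binary under test against the functional-968890
// profile and returns the command's error, if any.
func mk(args ...string) error {
	all := append([]string{"-p", "functional-968890"}, args...)
	return exec.Command("out/minikube-linux-amd64", all...).Run()
}

func main() {
	mk("ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	if mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
		log.Fatal("expected inspecti to fail after rmi")
	}
	if err := mk("cache", "reload"); err != nil {
		log.Fatal(err)
	}
	if err := mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		log.Fatal("image still missing after cache reload: ", err)
	}
}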

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 kubectl -- --context functional-968890 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-968890 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (38.99s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-968890 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0904 06:07:04.417197 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-968890 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.987361798s)
functional_test.go:776: restart took 38.987483731s for "functional-968890" cluster.
I0904 06:07:18.422007 1120074 config.go:182] Loaded profile config "functional-968890": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (38.99s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-968890 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.47s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-968890 logs: (1.464657781s)
--- PASS: TestFunctional/serial/LogsCmd (1.47s)

TestFunctional/serial/LogsFileCmd (1.42s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 logs --file /tmp/TestFunctionalserialLogsFileCmd1001477412/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-968890 logs --file /tmp/TestFunctionalserialLogsFileCmd1001477412/001/logs.txt: (1.415329834s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.42s)

TestFunctional/serial/InvalidService (4.25s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-968890 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-968890
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-968890: exit status 115 (283.217095ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.194:31506 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-968890 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.25s)
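
The exit-115 SVC_UNREACHABLE above is the expected outcome: invalid-svc selects no running pod, so `minikube service` has nothing to route to. One way to confirm that state while the service still exists (the test deletes it at the end) is to look at its empty endpoints; an assumed diagnostic, not the test's own check:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-968890",
		"get", "endpoints", "invalid-svc", "-o", "jsonpath={.subsets}").Output()
	if err != nil {
		log.Fatal(err)
	}
	if len(bytes.TrimSpace(out)) == 0 {
		fmt.Println("no endpoints: the service selects no running pods")
	} else {
		fmt.Printf("endpoints: %s\n", out)
	}
}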

TestFunctional/parallel/ConfigCmd (0.36s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-968890 config get cpus: exit status 14 (69.118913ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-968890 config get cpus: exit status 14 (52.938144ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
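
Both non-zero exits above are the same, expected behaviour: `config get` on a key that is not set exits with status 14 rather than succeeding with empty output. A sketch that branches on that code (the exit value is taken from this log; the program itself is illustrative):

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-968890", "config", "get", "cpus")
	err := cmd.Run()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("cpus is set")
	case errors.As(err, &ee) && ee.ExitCode() == 14:
		// The expected path right after `config unset cpus`.
		fmt.Println("cpus is not set")
	default:
		log.Fatal(err)
	}
}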

TestFunctional/parallel/DashboardCmd (13.41s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-968890 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-968890 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 1129264: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.41s)

TestFunctional/parallel/DryRun (0.33s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-968890 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-968890 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (162.706412ms)
-- stdout --
	* [functional-968890] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-1115845/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1115845/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0904 06:07:29.252922 1128232 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:07:29.253206 1128232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:07:29.253217 1128232 out.go:374] Setting ErrFile to fd 2...
	I0904 06:07:29.253222 1128232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:07:29.253415 1128232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1115845/.minikube/bin
	I0904 06:07:29.253940 1128232 out.go:368] Setting JSON to false
	I0904 06:07:29.255097 1128232 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":13792,"bootTime":1756952257,"procs":256,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 06:07:29.255200 1128232 start.go:140] virtualization: kvm guest
	I0904 06:07:29.256905 1128232 out.go:179] * [functional-968890] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 06:07:29.258048 1128232 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 06:07:29.258067 1128232 notify.go:220] Checking for updates...
	I0904 06:07:29.260163 1128232 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 06:07:29.261216 1128232 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1115845/kubeconfig
	I0904 06:07:29.262214 1128232 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1115845/.minikube
	I0904 06:07:29.263338 1128232 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 06:07:29.264416 1128232 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 06:07:29.266009 1128232 config.go:182] Loaded profile config "functional-968890": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:07:29.266475 1128232 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:07:29.266568 1128232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:07:29.285445 1128232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41563
	I0904 06:07:29.285905 1128232 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:07:29.286479 1128232 main.go:141] libmachine: Using API Version  1
	I0904 06:07:29.286500 1128232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:07:29.286926 1128232 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:07:29.287152 1128232 main.go:141] libmachine: (functional-968890) Calling .DriverName
	I0904 06:07:29.287445 1128232 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 06:07:29.287832 1128232 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:07:29.287905 1128232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:07:29.307660 1128232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34283
	I0904 06:07:29.308133 1128232 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:07:29.308585 1128232 main.go:141] libmachine: Using API Version  1
	I0904 06:07:29.308606 1128232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:07:29.309169 1128232 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:07:29.309328 1128232 main.go:141] libmachine: (functional-968890) Calling .DriverName
	I0904 06:07:29.351456 1128232 out.go:179] * Using the kvm2 driver based on existing profile
	I0904 06:07:29.352396 1128232 start.go:304] selected driver: kvm2
	I0904 06:07:29.352412 1128232 start.go:918] validating driver "kvm2" against &{Name:functional-968890 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.0 ClusterName:functional-968890 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:07:29.352566 1128232 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 06:07:29.356277 1128232 out.go:203] 
	W0904 06:07:29.358418 1128232 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0904 06:07:29.359900 1128232 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-968890 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.33s)
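
The interesting part of the dry run is the failure mode: exit status 23 with RSRC_INSUFFICIENT_REQ_MEMORY, because the requested 250MiB is below the usable minimum of 1800MB quoted in the message. The check reduces to something like the sketch below (threshold copied from the error text; the function is illustrative, not minikube's actual validator):

package main

import "fmt"

const minUsableMB = 1800 // the "usable minimum" from the error message

func validateRequestedMemory(reqMB int) error {
	if reqMB < minUsableMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			reqMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateRequestedMemory(250))  // fails, as in this dry run
	fmt.Println(validateRequestedMemory(4096)) // passes; the profile's real allocation
}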

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-968890 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-968890 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (170.241488ms)
-- stdout --
	* [functional-968890] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-1115845/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1115845/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0904 06:07:28.174085 1127939 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:07:28.174195 1127939 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:07:28.174235 1127939 out.go:374] Setting ErrFile to fd 2...
	I0904 06:07:28.174243 1127939 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:07:28.174559 1127939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1115845/.minikube/bin
	I0904 06:07:28.175091 1127939 out.go:368] Setting JSON to false
	I0904 06:07:28.176182 1127939 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":13791,"bootTime":1756952257,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 06:07:28.176239 1127939 start.go:140] virtualization: kvm guest
	I0904 06:07:28.178351 1127939 out.go:179] * [functional-968890] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	I0904 06:07:28.179738 1127939 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 06:07:28.179787 1127939 notify.go:220] Checking for updates...
	I0904 06:07:28.181791 1127939 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 06:07:28.182987 1127939 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1115845/kubeconfig
	I0904 06:07:28.184077 1127939 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1115845/.minikube
	I0904 06:07:28.185122 1127939 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 06:07:28.186167 1127939 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 06:07:28.187866 1127939 config.go:182] Loaded profile config "functional-968890": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:07:28.188410 1127939 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:07:28.188524 1127939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:07:28.210289 1127939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38223
	I0904 06:07:28.211146 1127939 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:07:28.211927 1127939 main.go:141] libmachine: Using API Version  1
	I0904 06:07:28.211954 1127939 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:07:28.212393 1127939 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:07:28.212629 1127939 main.go:141] libmachine: (functional-968890) Calling .DriverName
	I0904 06:07:28.212972 1127939 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 06:07:28.213421 1127939 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:07:28.213474 1127939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:07:28.231684 1127939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34733
	I0904 06:07:28.232250 1127939 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:07:28.232782 1127939 main.go:141] libmachine: Using API Version  1
	I0904 06:07:28.232807 1127939 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:07:28.233262 1127939 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:07:28.233477 1127939 main.go:141] libmachine: (functional-968890) Calling .DriverName
	I0904 06:07:28.267963 1127939 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I0904 06:07:28.268977 1127939 start.go:304] selected driver: kvm2
	I0904 06:07:28.268995 1127939 start.go:918] validating driver "kvm2" against &{Name:functional-968890 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.36.0-1756846819-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756936034-21409@sha256:06a2e6835062e5beff0e5288aa7d453ae87f4ed9d9f593dbbe436c8e34741bfc Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.0 ClusterName:functional-968890 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 06:07:28.269150 1127939 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 06:07:28.271075 1127939 out.go:203] 
	W0904 06:07:28.271978 1127939 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0904 06:07:28.272942 1127939 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (0.93s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.93s)

TestFunctional/parallel/ServiceCmdConnect (17.49s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-968890 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-968890 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-hdqg6" [48251895-5f61-48a2-995b-6433612a6a8f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-hdqg6" [48251895-5f61-48a2-995b-6433612a6a8f] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 17.004011607s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.194:32094
functional_test.go:1680: http://192.168.39.194:32094: success! body:
Request served by hello-node-connect-7d85dfc575-hdqg6
HTTP/1.1 GET /
Host: 192.168.39.194:32094
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (17.49s)
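
The final check above is just an HTTP GET against the NodePort URL that `service hello-node-connect --url` printed, asserting a 200 whose body names the serving pod. A plain net/http reproduction (URL copied from this run, so it is only reachable from the host that executed the test):

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	resp, err := http.Get("http://192.168.39.194:32094")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("status=%d\n%s", resp.StatusCode, body)
}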

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (39.39s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [d381ebc1-1c47-4e8e-9fb4-d58dddfdc54e] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006003801s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-968890 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-968890 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-968890 get pvc myclaim -o=json
I0904 06:07:34.965889 1120074 retry.go:31] will retry after 2.372876987s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:af11cf9a-7363-4e99-b819-37cd4b696df6 ResourceVersion:750 Generation:0 CreationTimestamp:2025-09-04 06:07:34 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-af11cf9a-7363-4e99-b819-37cd4b696df6 StorageClassName:0xc0019a5bc0 VolumeMode:0xc0019a5bd0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-968890 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-968890 apply -f testdata/storage-provisioner/pod.yaml
I0904 06:07:38.083021 1120074 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [ef41e814-86bf-44bf-973f-1f1a4250dd7b] Pending
helpers_test.go:352: "sp-pod" [ef41e814-86bf-44bf-973f-1f1a4250dd7b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [ef41e814-86bf-44bf-973f-1f1a4250dd7b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.00705641s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-968890 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-968890 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-968890 delete -f testdata/storage-provisioner/pod.yaml: (2.557590965s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-968890 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [956b3775-5c87-4807-bbea-274165a86c24] Pending
helpers_test.go:352: "sp-pod" [956b3775-5c87-4807-bbea-274165a86c24] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [956b3775-5c87-4807-bbea-274165a86c24] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004947401s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-968890 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (39.39s)
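The claim under test can be read straight out of the last-applied-configuration annotation in the retry log above. A minimal reconstruction (not necessarily the verbatim testdata/storage-provisioner/pvc.yaml):

kubectl --context functional-968890 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
  namespace: default
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
  volumeMode: Filesystem
EOF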

TestFunctional/parallel/SSHCmd (0.43s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

TestFunctional/parallel/CpCmd (1.33s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh -n functional-968890 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 cp functional-968890:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4275090097/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh -n functional-968890 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh -n functional-968890 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.33s)

TestFunctional/parallel/MySQL (22.5s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-968890 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-pqfwm" [690871a7-8428-4342-bb03-b22f20b38f27] Pending
helpers_test.go:352: "mysql-5bb876957f-pqfwm" [690871a7-8428-4342-bb03-b22f20b38f27] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-pqfwm" [690871a7-8428-4342-bb03-b22f20b38f27] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.015876308s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-968890 exec mysql-5bb876957f-pqfwm -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-968890 exec mysql-5bb876957f-pqfwm -- mysql -ppassword -e "show databases;": exit status 1 (546.707748ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I0904 06:07:46.090968 1120074 retry.go:31] will retry after 922.541519ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-968890 exec mysql-5bb876957f-pqfwm -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-968890 exec mysql-5bb876957f-pqfwm -- mysql -ppassword -e "show databases;": exit status 1 (175.223636ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I0904 06:07:47.189632 1120074 retry.go:31] will retry after 1.442094588s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-968890 exec mysql-5bb876957f-pqfwm -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.50s)
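The two failed attempts are expected: the pod reports Running before mysqld finishes initializing, so the first exec hits the init-phase password rejection (ERROR 1045) and the second hits the not-yet-listening socket (ERROR 2002). A hedged shell equivalent of the test's retry (the deploy/mysql target and the 60s budget are assumptions, not taken from the test):

for i in $(seq 1 30); do
  # succeeds once mysqld is fully up and accepts the root password
  kubectl --context functional-968890 exec deploy/mysql -- \
    mysql -ppassword -e 'show databases;' && break
  sleep 2
done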

TestFunctional/parallel/FileSync (0.22s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1120074/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh "sudo cat /etc/test/nested/copy/1120074/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)
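FileSync copies anything placed under $MINIKUBE_HOME/files on the host into the VM at the same relative path, so the file checked above corresponds to a host-side layout like this (sketch; the content string matches the log line above):

mkdir -p ~/.minikube/files/etc/test/nested/copy/1120074
echo 'Test file for checking file sync process' \
  > ~/.minikube/files/etc/test/nested/copy/1120074/hosts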

TestFunctional/parallel/CertSync (1.35s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1120074.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh "sudo cat /etc/ssl/certs/1120074.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1120074.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh "sudo cat /usr/share/ca-certificates/1120074.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/11200742.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh "sudo cat /etc/ssl/certs/11200742.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/11200742.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh "sudo cat /usr/share/ca-certificates/11200742.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.35s)
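The hashed names (51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention, under which /etc/ssl/certs/<hash>.0 carries the same cert as the corresponding .pem. A sketch of verifying that inside the VM (the hash-to-.pem pairing is inferred from the grouped checks above):

out/minikube-linux-amd64 -p functional-968890 ssh \
  'openssl x509 -in /usr/share/ca-certificates/1120074.pem -noout -subject_hash'
# expected output: 51391683, i.e. the stem of /etc/ssl/certs/51391683.0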

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-968890 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-968890 ssh "sudo systemctl is-active docker": exit status 1 (247.204119ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-968890 ssh "sudo systemctl is-active containerd": exit status 1 (230.466332ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)
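Exit status 3 is systemd's documented code for an inactive unit: systemctl is-active prints the state and exits non-zero for any state other than active, which is what lets the test treat the non-zero exit plus "inactive" stdout as a pass. Reproduced directly (sketch):

out/minikube-linux-amd64 -p functional-968890 ssh \
  'sudo systemctl is-active docker; echo exit=$?'
# -> inactive / exit=3 on a crio-only node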

TestFunctional/parallel/License (0.28s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.28s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)
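"profile lis" is a deliberate misspelling: the test asserts that a bad subcommand does not create a profile as a side effect. A sketch of the same check by hand (the .valid array layout is an assumption about minikube's profile-list JSON):

out/minikube-linux-amd64 profile list --output json \
  | jq -r '.valid[].Name'
# should list functional-968890 but no profile named "lis"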

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-968890 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-968890
localhost/kicbase/echo-server:functional-968890
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-968890 image ls --format short --alsologtostderr:
I0904 06:07:59.765190 1129490 out.go:360] Setting OutFile to fd 1 ...
I0904 06:07:59.765497 1129490 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:07:59.765508 1129490 out.go:374] Setting ErrFile to fd 2...
I0904 06:07:59.765514 1129490 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:07:59.765706 1129490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1115845/.minikube/bin
I0904 06:07:59.766315 1129490 config.go:182] Loaded profile config "functional-968890": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 06:07:59.766478 1129490 config.go:182] Loaded profile config "functional-968890": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 06:07:59.766929 1129490 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
I0904 06:07:59.767014 1129490 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 06:07:59.782272 1129490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42845
I0904 06:07:59.782736 1129490 main.go:141] libmachine: () Calling .GetVersion
I0904 06:07:59.783280 1129490 main.go:141] libmachine: Using API Version  1
I0904 06:07:59.783306 1129490 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 06:07:59.783678 1129490 main.go:141] libmachine: () Calling .GetMachineName
I0904 06:07:59.783864 1129490 main.go:141] libmachine: (functional-968890) Calling .GetState
I0904 06:07:59.785550 1129490 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
I0904 06:07:59.785594 1129490 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 06:07:59.800301 1129490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32965
I0904 06:07:59.800802 1129490 main.go:141] libmachine: () Calling .GetVersion
I0904 06:07:59.801247 1129490 main.go:141] libmachine: Using API Version  1
I0904 06:07:59.801268 1129490 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 06:07:59.801615 1129490 main.go:141] libmachine: () Calling .GetMachineName
I0904 06:07:59.801823 1129490 main.go:141] libmachine: (functional-968890) Calling .DriverName
I0904 06:07:59.802029 1129490 ssh_runner.go:195] Run: systemctl --version
I0904 06:07:59.802060 1129490 main.go:141] libmachine: (functional-968890) Calling .GetSSHHostname
I0904 06:07:59.804627 1129490 main.go:141] libmachine: (functional-968890) DBG | domain functional-968890 has defined MAC address 52:54:00:c6:81:49 in network mk-functional-968890
I0904 06:07:59.805140 1129490 main.go:141] libmachine: (functional-968890) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:81:49", ip: ""} in network mk-functional-968890: {Iface:virbr1 ExpiryTime:2025-09-04 07:04:41 +0000 UTC Type:0 Mac:52:54:00:c6:81:49 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:functional-968890 Clientid:01:52:54:00:c6:81:49}
I0904 06:07:59.805169 1129490 main.go:141] libmachine: (functional-968890) DBG | domain functional-968890 has defined IP address 192.168.39.194 and MAC address 52:54:00:c6:81:49 in network mk-functional-968890
I0904 06:07:59.805297 1129490 main.go:141] libmachine: (functional-968890) Calling .GetSSHPort
I0904 06:07:59.805451 1129490 main.go:141] libmachine: (functional-968890) Calling .GetSSHKeyPath
I0904 06:07:59.805581 1129490 main.go:141] libmachine: (functional-968890) Calling .GetSSHUsername
I0904 06:07:59.805764 1129490 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/functional-968890/id_rsa Username:docker}
I0904 06:07:59.886275 1129490 ssh_runner.go:195] Run: sudo crictl images --output json
I0904 06:07:59.925572 1129490 main.go:141] libmachine: Making call to close driver server
I0904 06:07:59.925589 1129490 main.go:141] libmachine: (functional-968890) Calling .Close
I0904 06:07:59.925932 1129490 main.go:141] libmachine: Successfully made call to close driver server
I0904 06:07:59.925956 1129490 main.go:141] libmachine: Making call to close connection to plugin binary
I0904 06:07:59.925966 1129490 main.go:141] libmachine: Making call to close driver server
I0904 06:07:59.925972 1129490 main.go:141] libmachine: (functional-968890) Calling .Close
I0904 06:07:59.925978 1129490 main.go:141] libmachine: (functional-968890) DBG | Closing plugin on server side
I0904 06:07:59.926217 1129490 main.go:141] libmachine: (functional-968890) DBG | Closing plugin on server side
I0904 06:07:59.926311 1129490 main.go:141] libmachine: Successfully made call to close driver server
I0904 06:07:59.926367 1129490 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
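As the stderr shows, image ls is answered by SSHing into the node and running crictl; the same data can be pulled raw (sketch; jq is not part of the test):

out/minikube-linux-amd64 -p functional-968890 ssh 'sudo crictl images --output json' \
  | jq -r '.images[].repoTags[]'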

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-968890 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ a0af72f2ec6d6 │ 76MB   │
│ docker.io/library/nginx                 │ latest             │ ad5708199ec7d │ 197MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ 90550c43ad2bc │ 89.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-968890  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ localhost/minikube-local-cache-test     │ functional-968890  │ 5804d1079ee51 │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ df0860106674d │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ 46169d968e920 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-968890 image ls --format table --alsologtostderr:
I0904 06:08:00.207291 1129538 out.go:360] Setting OutFile to fd 1 ...
I0904 06:08:00.207558 1129538 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:08:00.207570 1129538 out.go:374] Setting ErrFile to fd 2...
I0904 06:08:00.207577 1129538 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:08:00.207777 1129538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1115845/.minikube/bin
I0904 06:08:00.208362 1129538 config.go:182] Loaded profile config "functional-968890": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 06:08:00.208477 1129538 config.go:182] Loaded profile config "functional-968890": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 06:08:00.208868 1129538 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
I0904 06:08:00.208950 1129538 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 06:08:00.224586 1129538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34101
I0904 06:08:00.225163 1129538 main.go:141] libmachine: () Calling .GetVersion
I0904 06:08:00.225815 1129538 main.go:141] libmachine: Using API Version  1
I0904 06:08:00.225839 1129538 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 06:08:00.226251 1129538 main.go:141] libmachine: () Calling .GetMachineName
I0904 06:08:00.226462 1129538 main.go:141] libmachine: (functional-968890) Calling .GetState
I0904 06:08:00.228341 1129538 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
I0904 06:08:00.228379 1129538 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 06:08:00.245802 1129538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41677
I0904 06:08:00.246190 1129538 main.go:141] libmachine: () Calling .GetVersion
I0904 06:08:00.246700 1129538 main.go:141] libmachine: Using API Version  1
I0904 06:08:00.246725 1129538 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 06:08:00.247143 1129538 main.go:141] libmachine: () Calling .GetMachineName
I0904 06:08:00.247393 1129538 main.go:141] libmachine: (functional-968890) Calling .DriverName
I0904 06:08:00.247585 1129538 ssh_runner.go:195] Run: systemctl --version
I0904 06:08:00.247611 1129538 main.go:141] libmachine: (functional-968890) Calling .GetSSHHostname
I0904 06:08:00.250300 1129538 main.go:141] libmachine: (functional-968890) DBG | domain functional-968890 has defined MAC address 52:54:00:c6:81:49 in network mk-functional-968890
I0904 06:08:00.250678 1129538 main.go:141] libmachine: (functional-968890) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:81:49", ip: ""} in network mk-functional-968890: {Iface:virbr1 ExpiryTime:2025-09-04 07:04:41 +0000 UTC Type:0 Mac:52:54:00:c6:81:49 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:functional-968890 Clientid:01:52:54:00:c6:81:49}
I0904 06:08:00.250710 1129538 main.go:141] libmachine: (functional-968890) DBG | domain functional-968890 has defined IP address 192.168.39.194 and MAC address 52:54:00:c6:81:49 in network mk-functional-968890
I0904 06:08:00.250796 1129538 main.go:141] libmachine: (functional-968890) Calling .GetSSHPort
I0904 06:08:00.250985 1129538 main.go:141] libmachine: (functional-968890) Calling .GetSSHKeyPath
I0904 06:08:00.251118 1129538 main.go:141] libmachine: (functional-968890) Calling .GetSSHUsername
I0904 06:08:00.251266 1129538 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/functional-968890/id_rsa Username:docker}
I0904 06:08:00.342327 1129538 ssh_runner.go:195] Run: sudo crictl images --output json
I0904 06:08:00.408597 1129538 main.go:141] libmachine: Making call to close driver server
I0904 06:08:00.408619 1129538 main.go:141] libmachine: (functional-968890) Calling .Close
I0904 06:08:00.408928 1129538 main.go:141] libmachine: Successfully made call to close driver server
I0904 06:08:00.408936 1129538 main.go:141] libmachine: (functional-968890) DBG | Closing plugin on server side
I0904 06:08:00.408967 1129538 main.go:141] libmachine: Making call to close connection to plugin binary
I0904 06:08:00.408979 1129538 main.go:141] libmachine: Making call to close driver server
I0904 06:08:00.408987 1129538 main.go:141] libmachine: (functional-968890) Calling .Close
I0904 06:08:00.409214 1129538 main.go:141] libmachine: Successfully made call to close driver server
I0904 06:08:00.409226 1129538 main.go:141] libmachine: Making call to close connection to plugin binary
I0904 06:08:00.409250 1129538 main.go:141] libmachine: (functional-968890) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-968890 image ls --format json --alsologtostderr:
[{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"53844823"},{"id":"ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224","repoDigests":["docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57","docker.io/library/nginx@sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7"],"repoTags":["docker.io/library/nginx:latest"],"size":"196544386"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"]
,"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"5804d1079ee5130268f2557a7309c19f7abbec69bcf5a96f2d8b557b452a45e7","repoDigests":["localhost/minikube-local-cache-test@sha256:01fea54472f23aec3e49370252fc0588285b209a5584c7087c74d7c5ec516921"],"repoTags":["localhost/minikube-local-cache-test:functional-968890"],"size":"3330"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a
06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"89050097"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-968890"],"size":"4943877"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause
@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"73138071"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker
.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],
"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6
","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"76004183"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-968890 image ls --format json --alsologtostderr:
I0904 06:07:59.979768 1129514 out.go:360] Setting OutFile to fd 1 ...
I0904 06:07:59.980041 1129514 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:07:59.980050 1129514 out.go:374] Setting ErrFile to fd 2...
I0904 06:07:59.980053 1129514 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:07:59.980241 1129514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1115845/.minikube/bin
I0904 06:07:59.980779 1129514 config.go:182] Loaded profile config "functional-968890": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 06:07:59.980873 1129514 config.go:182] Loaded profile config "functional-968890": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 06:07:59.981191 1129514 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
I0904 06:07:59.981246 1129514 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 06:07:59.996555 1129514 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35711
I0904 06:07:59.997029 1129514 main.go:141] libmachine: () Calling .GetVersion
I0904 06:07:59.997599 1129514 main.go:141] libmachine: Using API Version  1
I0904 06:07:59.997624 1129514 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 06:07:59.997977 1129514 main.go:141] libmachine: () Calling .GetMachineName
I0904 06:07:59.998196 1129514 main.go:141] libmachine: (functional-968890) Calling .GetState
I0904 06:08:00.000268 1129514 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
I0904 06:08:00.000339 1129514 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 06:08:00.016682 1129514 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39567
I0904 06:08:00.017243 1129514 main.go:141] libmachine: () Calling .GetVersion
I0904 06:08:00.017761 1129514 main.go:141] libmachine: Using API Version  1
I0904 06:08:00.017792 1129514 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 06:08:00.018113 1129514 main.go:141] libmachine: () Calling .GetMachineName
I0904 06:08:00.018326 1129514 main.go:141] libmachine: (functional-968890) Calling .DriverName
I0904 06:08:00.018544 1129514 ssh_runner.go:195] Run: systemctl --version
I0904 06:08:00.018576 1129514 main.go:141] libmachine: (functional-968890) Calling .GetSSHHostname
I0904 06:08:00.021037 1129514 main.go:141] libmachine: (functional-968890) DBG | domain functional-968890 has defined MAC address 52:54:00:c6:81:49 in network mk-functional-968890
I0904 06:08:00.021478 1129514 main.go:141] libmachine: (functional-968890) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:81:49", ip: ""} in network mk-functional-968890: {Iface:virbr1 ExpiryTime:2025-09-04 07:04:41 +0000 UTC Type:0 Mac:52:54:00:c6:81:49 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:functional-968890 Clientid:01:52:54:00:c6:81:49}
I0904 06:08:00.021515 1129514 main.go:141] libmachine: (functional-968890) DBG | domain functional-968890 has defined IP address 192.168.39.194 and MAC address 52:54:00:c6:81:49 in network mk-functional-968890
I0904 06:08:00.021689 1129514 main.go:141] libmachine: (functional-968890) Calling .GetSSHPort
I0904 06:08:00.021865 1129514 main.go:141] libmachine: (functional-968890) Calling .GetSSHKeyPath
I0904 06:08:00.022056 1129514 main.go:141] libmachine: (functional-968890) Calling .GetSSHUsername
I0904 06:08:00.022205 1129514 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/functional-968890/id_rsa Username:docker}
I0904 06:08:00.103275 1129514 ssh_runner.go:195] Run: sudo crictl images --output json
I0904 06:08:00.155578 1129514 main.go:141] libmachine: Making call to close driver server
I0904 06:08:00.155591 1129514 main.go:141] libmachine: (functional-968890) Calling .Close
I0904 06:08:00.155907 1129514 main.go:141] libmachine: Successfully made call to close driver server
I0904 06:08:00.155949 1129514 main.go:141] libmachine: (functional-968890) DBG | Closing plugin on server side
I0904 06:08:00.155974 1129514 main.go:141] libmachine: Making call to close connection to plugin binary
I0904 06:08:00.155991 1129514 main.go:141] libmachine: Making call to close driver server
I0904 06:08:00.156004 1129514 main.go:141] libmachine: (functional-968890) Calling .Close
I0904 06:08:00.156281 1129514 main.go:141] libmachine: Successfully made call to close driver server
I0904 06:08:00.156308 1129514 main.go:141] libmachine: Making call to close connection to plugin binary
I0904 06:08:00.156307 1129514 main.go:141] libmachine: (functional-968890) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
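The JSON form is an array of {id, repoDigests, repoTags, size} objects, so a short-ID/tag listing can be derived with jq (sketch; jq is not part of the test):

out/minikube-linux-amd64 -p functional-968890 image ls --format json 2>/dev/null \
  | jq -r '.[] | "\(.id[0:13])  \(.repoTags | join(", "))"'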

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-968890 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-968890
size: "4943877"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224
repoDigests:
- docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57
- docker.io/library/nginx@sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7
repoTags:
- docker.io/library/nginx:latest
size: "196544386"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "76004183"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "53844823"
- id: 5804d1079ee5130268f2557a7309c19f7abbec69bcf5a96f2d8b557b452a45e7
repoDigests:
- localhost/minikube-local-cache-test@sha256:01fea54472f23aec3e49370252fc0588285b209a5584c7087c74d7c5ec516921
repoTags:
- localhost/minikube-local-cache-test:functional-968890
size: "3330"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "89050097"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "73138071"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-968890 image ls --format yaml --alsologtostderr:
I0904 06:08:00.461859 1129586 out.go:360] Setting OutFile to fd 1 ...
I0904 06:08:00.462115 1129586 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:08:00.462125 1129586 out.go:374] Setting ErrFile to fd 2...
I0904 06:08:00.462129 1129586 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:08:00.462392 1129586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1115845/.minikube/bin
I0904 06:08:00.463093 1129586 config.go:182] Loaded profile config "functional-968890": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 06:08:00.463201 1129586 config.go:182] Loaded profile config "functional-968890": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 06:08:00.463605 1129586 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
I0904 06:08:00.463680 1129586 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 06:08:00.480639 1129586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38239
I0904 06:08:00.481130 1129586 main.go:141] libmachine: () Calling .GetVersion
I0904 06:08:00.481771 1129586 main.go:141] libmachine: Using API Version  1
I0904 06:08:00.481791 1129586 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 06:08:00.482186 1129586 main.go:141] libmachine: () Calling .GetMachineName
I0904 06:08:00.482448 1129586 main.go:141] libmachine: (functional-968890) Calling .GetState
I0904 06:08:00.484739 1129586 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
I0904 06:08:00.484798 1129586 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 06:08:00.500644 1129586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39495
I0904 06:08:00.501168 1129586 main.go:141] libmachine: () Calling .GetVersion
I0904 06:08:00.501746 1129586 main.go:141] libmachine: Using API Version  1
I0904 06:08:00.501776 1129586 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 06:08:00.502263 1129586 main.go:141] libmachine: () Calling .GetMachineName
I0904 06:08:00.502463 1129586 main.go:141] libmachine: (functional-968890) Calling .DriverName
I0904 06:08:00.502729 1129586 ssh_runner.go:195] Run: systemctl --version
I0904 06:08:00.502765 1129586 main.go:141] libmachine: (functional-968890) Calling .GetSSHHostname
I0904 06:08:00.505745 1129586 main.go:141] libmachine: (functional-968890) DBG | domain functional-968890 has defined MAC address 52:54:00:c6:81:49 in network mk-functional-968890
I0904 06:08:00.506184 1129586 main.go:141] libmachine: (functional-968890) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:81:49", ip: ""} in network mk-functional-968890: {Iface:virbr1 ExpiryTime:2025-09-04 07:04:41 +0000 UTC Type:0 Mac:52:54:00:c6:81:49 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:functional-968890 Clientid:01:52:54:00:c6:81:49}
I0904 06:08:00.506222 1129586 main.go:141] libmachine: (functional-968890) DBG | domain functional-968890 has defined IP address 192.168.39.194 and MAC address 52:54:00:c6:81:49 in network mk-functional-968890
I0904 06:08:00.506419 1129586 main.go:141] libmachine: (functional-968890) Calling .GetSSHPort
I0904 06:08:00.506594 1129586 main.go:141] libmachine: (functional-968890) Calling .GetSSHKeyPath
I0904 06:08:00.506750 1129586 main.go:141] libmachine: (functional-968890) Calling .GetSSHUsername
I0904 06:08:00.506952 1129586 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/functional-968890/id_rsa Username:docker}
I0904 06:08:00.611016 1129586 ssh_runner.go:195] Run: sudo crictl images --output json
I0904 06:08:00.680834 1129586 main.go:141] libmachine: Making call to close driver server
I0904 06:08:00.680854 1129586 main.go:141] libmachine: (functional-968890) Calling .Close
I0904 06:08:00.681153 1129586 main.go:141] libmachine: Successfully made call to close driver server
I0904 06:08:00.681175 1129586 main.go:141] libmachine: Making call to close connection to plugin binary
I0904 06:08:00.681184 1129586 main.go:141] libmachine: Making call to close driver server
I0904 06:08:00.681192 1129586 main.go:141] libmachine: (functional-968890) Calling .Close
I0904 06:08:00.681193 1129586 main.go:141] libmachine: (functional-968890) DBG | Closing plugin on server side
I0904 06:08:00.681476 1129586 main.go:141] libmachine: Successfully made call to close driver server
I0904 06:08:00.681497 1129586 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-968890 ssh pgrep buildkitd: exit status 1 (212.106408ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 image build -t localhost/my-image:functional-968890 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-968890 image build -t localhost/my-image:functional-968890 testdata/build --alsologtostderr: (3.283130824s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-968890 image build -t localhost/my-image:functional-968890 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8f3ba3900bf
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-968890
--> b6cef13ad3b
Successfully tagged localhost/my-image:functional-968890
b6cef13ad3b7305f0fe6f1b9577b5bfcf1801db09a6eece42b7e73aaa51febdd
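The three STEPs pin down the build context fairly precisely; a reconstruction of testdata/build (the file names match the steps shown, but the content of content.txt is an assumption):

mkdir -p testdata/build && cd testdata/build
echo 'hello' > content.txt   # placeholder payload (assumption)
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF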
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-968890 image build -t localhost/my-image:functional-968890 testdata/build --alsologtostderr:
I0904 06:08:00.947673 1129646 out.go:360] Setting OutFile to fd 1 ...
I0904 06:08:00.947800 1129646 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:08:00.947809 1129646 out.go:374] Setting ErrFile to fd 2...
I0904 06:08:00.947813 1129646 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0904 06:08:00.948080 1129646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1115845/.minikube/bin
I0904 06:08:00.948693 1129646 config.go:182] Loaded profile config "functional-968890": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 06:08:00.949489 1129646 config.go:182] Loaded profile config "functional-968890": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0904 06:08:00.949839 1129646 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
I0904 06:08:00.949884 1129646 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 06:08:00.965695 1129646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42873
I0904 06:08:00.966203 1129646 main.go:141] libmachine: () Calling .GetVersion
I0904 06:08:00.966781 1129646 main.go:141] libmachine: Using API Version  1
I0904 06:08:00.966809 1129646 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 06:08:00.967190 1129646 main.go:141] libmachine: () Calling .GetMachineName
I0904 06:08:00.967393 1129646 main.go:141] libmachine: (functional-968890) Calling .GetState
I0904 06:08:00.969255 1129646 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
I0904 06:08:00.969309 1129646 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 06:08:00.984778 1129646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33993
I0904 06:08:00.985352 1129646 main.go:141] libmachine: () Calling .GetVersion
I0904 06:08:00.985849 1129646 main.go:141] libmachine: Using API Version  1
I0904 06:08:00.985871 1129646 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 06:08:00.986228 1129646 main.go:141] libmachine: () Calling .GetMachineName
I0904 06:08:00.986436 1129646 main.go:141] libmachine: (functional-968890) Calling .DriverName
I0904 06:08:00.986680 1129646 ssh_runner.go:195] Run: systemctl --version
I0904 06:08:00.986711 1129646 main.go:141] libmachine: (functional-968890) Calling .GetSSHHostname
I0904 06:08:00.989523 1129646 main.go:141] libmachine: (functional-968890) DBG | domain functional-968890 has defined MAC address 52:54:00:c6:81:49 in network mk-functional-968890
I0904 06:08:00.989942 1129646 main.go:141] libmachine: (functional-968890) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:81:49", ip: ""} in network mk-functional-968890: {Iface:virbr1 ExpiryTime:2025-09-04 07:04:41 +0000 UTC Type:0 Mac:52:54:00:c6:81:49 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:functional-968890 Clientid:01:52:54:00:c6:81:49}
I0904 06:08:00.989979 1129646 main.go:141] libmachine: (functional-968890) DBG | domain functional-968890 has defined IP address 192.168.39.194 and MAC address 52:54:00:c6:81:49 in network mk-functional-968890
I0904 06:08:00.990113 1129646 main.go:141] libmachine: (functional-968890) Calling .GetSSHPort
I0904 06:08:00.990274 1129646 main.go:141] libmachine: (functional-968890) Calling .GetSSHKeyPath
I0904 06:08:00.990457 1129646 main.go:141] libmachine: (functional-968890) Calling .GetSSHUsername
I0904 06:08:00.990587 1129646 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/functional-968890/id_rsa Username:docker}
I0904 06:08:01.080501 1129646 build_images.go:161] Building image from path: /tmp/build.1930826597.tar
I0904 06:08:01.080575 1129646 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0904 06:08:01.100759 1129646 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1930826597.tar
I0904 06:08:01.106796 1129646 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1930826597.tar: stat -c "%s %y" /var/lib/minikube/build/build.1930826597.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1930826597.tar': No such file or directory
I0904 06:08:01.106873 1129646 ssh_runner.go:362] scp /tmp/build.1930826597.tar --> /var/lib/minikube/build/build.1930826597.tar (3072 bytes)
I0904 06:08:01.143765 1129646 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1930826597
I0904 06:08:01.164972 1129646 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1930826597 -xf /var/lib/minikube/build/build.1930826597.tar
I0904 06:08:01.180749 1129646 crio.go:315] Building image: /var/lib/minikube/build/build.1930826597
I0904 06:08:01.180825 1129646 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-968890 /var/lib/minikube/build/build.1930826597 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0904 06:08:04.152206 1129646 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-968890 /var/lib/minikube/build/build.1930826597 --cgroup-manager=cgroupfs: (2.971351604s)
I0904 06:08:04.152289 1129646 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1930826597
I0904 06:08:04.165853 1129646 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1930826597.tar
I0904 06:08:04.178586 1129646 build_images.go:217] Built localhost/my-image:functional-968890 from /tmp/build.1930826597.tar
I0904 06:08:04.178637 1129646 build_images.go:133] succeeded building to: functional-968890
I0904 06:08:04.178645 1129646 build_images.go:134] failed building to: 
I0904 06:08:04.178680 1129646 main.go:141] libmachine: Making call to close driver server
I0904 06:08:04.178697 1129646 main.go:141] libmachine: (functional-968890) Calling .Close
I0904 06:08:04.179021 1129646 main.go:141] libmachine: Successfully made call to close driver server
I0904 06:08:04.179046 1129646 main.go:141] libmachine: Making call to close connection to plugin binary
I0904 06:08:04.179063 1129646 main.go:141] libmachine: Making call to close driver server
I0904 06:08:04.179072 1129646 main.go:141] libmachine: (functional-968890) Calling .Close
I0904 06:08:04.179334 1129646 main.go:141] libmachine: (functional-968890) DBG | Closing plugin on server side
I0904 06:08:04.179377 1129646 main.go:141] libmachine: Successfully made call to close driver server
I0904 06:08:04.179410 1129646 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.71s)
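The three STEP lines in the build output above imply a minimal Dockerfile. The sketch below reconstructs the build context from the log and drives the same invocation the test uses; the actual contents of testdata/build and of content.txt are not shown in the report, so the file bodies here are assumptions:

# Hypothetical reconstruction of the build context (file contents assumed):
mkdir -p /tmp/build-demo
echo "placeholder" > /tmp/build-demo/content.txt
printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > /tmp/build-demo/Dockerfile
# Same build path as functional_test.go:330, pointed at the reconstructed context:
out/minikube-linux-amd64 -p functional-968890 image build -t localhost/my-image:functional-968890 /tmp/build-demo --alsologtostderr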

TestFunctional/parallel/ImageCommands/Setup (1.76s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.73111309s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-968890
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.76s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "315.119009ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "59.076388ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 update-context --alsologtostderr -v=2
I0904 06:07:58.945077 1120074 detect.go:223] nested VM detected
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "317.756317ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "59.461855ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 image load --daemon kicbase/echo-server:functional-968890 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-968890 image load --daemon kicbase/echo-server:functional-968890 --alsologtostderr: (1.202672669s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.45s)

TestFunctional/parallel/MountCmd/any-port (17.46s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-968890 /tmp/TestFunctionalparallelMountCmdany-port769761064/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1756966048109528464" to /tmp/TestFunctionalparallelMountCmdany-port769761064/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1756966048109528464" to /tmp/TestFunctionalparallelMountCmdany-port769761064/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1756966048109528464" to /tmp/TestFunctionalparallelMountCmdany-port769761064/001/test-1756966048109528464
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-968890 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (238.682187ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0904 06:07:28.348535 1120074 retry.go:31] will retry after 294.01634ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  4 06:07 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  4 06:07 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  4 06:07 test-1756966048109528464
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh cat /mount-9p/test-1756966048109528464
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-968890 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [875f4fda-5d6d-4084-8f52-9cf2d2909a48] Pending
helpers_test.go:352: "busybox-mount" [875f4fda-5d6d-4084-8f52-9cf2d2909a48] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [875f4fda-5d6d-4084-8f52-9cf2d2909a48] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [875f4fda-5d6d-4084-8f52-9cf2d2909a48] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 15.003446439s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-968890 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-968890 /tmp/TestFunctionalparallelMountCmdany-port769761064/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (17.46s)
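The mount sequence above reduces to a short manual flow. A sketch, assuming /tmp/hostdir exists on the host; every flag mirrors an invocation logged above:

# Start the 9p mount in the background, then verify it from inside the guest:
out/minikube-linux-amd64 mount -p functional-968890 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
out/minikube-linux-amd64 -p functional-968890 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-968890 ssh -- ls -la /mount-9p
# Kill every mount process for the profile, as the VerifyCleanup subtest does later:
out/minikube-linux-amd64 mount -p functional-968890 --kill=true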

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 image load --daemon kicbase/echo-server:functional-968890 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.23s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.45s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.45s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-968890
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 image load --daemon kicbase/echo-server:functional-968890 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 image ls
E0904 06:07:32.124952 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.24s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (6.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 image save kicbase/echo-server:functional-968890 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-968890 image save kicbase/echo-server:functional-968890 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (6.821720364s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (6.82s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 image rm kicbase/echo-server:functional-968890 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.12s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-968890
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 image save --daemon kicbase/echo-server:functional-968890 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-968890
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)
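Taken together, the image subtests above exercise one save/remove/load round trip against the cluster runtime. Condensed into the equivalent manual commands (the tarball path is a placeholder):

# Save the image to a tarball, drop it from the runtime, then restore and list it:
out/minikube-linux-amd64 -p functional-968890 image save kicbase/echo-server:functional-968890 /tmp/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-968890 image rm kicbase/echo-server:functional-968890 --alsologtostderr
out/minikube-linux-amd64 -p functional-968890 image load /tmp/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-968890 image ls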

TestFunctional/parallel/MountCmd/specific-port (1.74s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-968890 /tmp/TestFunctionalparallelMountCmdspecific-port1096468954/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-968890 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (268.337202ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0904 06:07:45.842649 1120074 retry.go:31] will retry after 381.130314ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-968890 /tmp/TestFunctionalparallelMountCmdspecific-port1096468954/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-968890 ssh "sudo umount -f /mount-9p": exit status 1 (226.181872ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-968890 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-968890 /tmp/TestFunctionalparallelMountCmdspecific-port1096468954/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.74s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.33s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-968890 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1958639921/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-968890 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1958639921/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-968890 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1958639921/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-968890 ssh "findmnt -T" /mount1: exit status 1 (292.306317ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0904 06:07:47.606293 1120074 retry.go:31] will retry after 321.173981ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-968890 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-968890 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1958639921/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-968890 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1958639921/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-968890 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1958639921/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.33s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-968890 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-968890 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-lpkxm" [d9c4265e-0d45-4b43-9ada-38654c814c05] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-lpkxm" [d9c4265e-0d45-4b43-9ada-38654c814c05] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.0039641s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.42s)

TestFunctional/parallel/ServiceCmd/List (1.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-968890 service list: (1.30210206s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.30s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 service list -o json
2025/09/04 06:08:01 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-968890 service list -o json: (1.246642928s)
functional_test.go:1504: Took "1.246744152s" to run "out/minikube-linux-amd64 -p functional-968890 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.25s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.194:30358
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

TestFunctional/parallel/ServiceCmd/Format (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

TestFunctional/parallel/ServiceCmd/URL (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-968890 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.194:30358
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)
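The ServiceCmd subtests above all resolve the same NodePort endpoint in different output formats. A condensed manual equivalent; the final curl is illustrative only and is not part of the test:

out/minikube-linux-amd64 -p functional-968890 service list
out/minikube-linux-amd64 -p functional-968890 service hello-node --url
# The run above resolved http://192.168.39.194:30358; probe it directly:
curl http://192.168.39.194:30358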

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-968890
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-968890
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-968890
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (250.44s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E0904 06:12:04.418339 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-480726 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (4m9.726368789s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (250.44s)

TestMultiControlPlane/serial/DeployApp (6.92s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-480726 kubectl -- rollout status deployment/busybox: (4.673089529s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 kubectl -- exec busybox-7b57f96db7-9xtj9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 kubectl -- exec busybox-7b57f96db7-plhvx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 kubectl -- exec busybox-7b57f96db7-q5gx8 -- nslookup kubernetes.io
E0904 06:12:26.528229 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:12:26.534663 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:12:26.546177 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:12:26.567660 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 kubectl -- exec busybox-7b57f96db7-9xtj9 -- nslookup kubernetes.default
E0904 06:12:26.609720 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:12:26.691253 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 kubectl -- exec busybox-7b57f96db7-plhvx -- nslookup kubernetes.default
E0904 06:12:26.853316 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 kubectl -- exec busybox-7b57f96db7-q5gx8 -- nslookup kubernetes.default
E0904 06:12:27.175294 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 kubectl -- exec busybox-7b57f96db7-9xtj9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 kubectl -- exec busybox-7b57f96db7-plhvx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 kubectl -- exec busybox-7b57f96db7-q5gx8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.92s)

TestMultiControlPlane/serial/PingHostFromPods (1.21s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
E0904 06:12:27.816598 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 kubectl -- exec busybox-7b57f96db7-9xtj9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 kubectl -- exec busybox-7b57f96db7-9xtj9 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 kubectl -- exec busybox-7b57f96db7-plhvx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 kubectl -- exec busybox-7b57f96db7-plhvx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 kubectl -- exec busybox-7b57f96db7-q5gx8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 kubectl -- exec busybox-7b57f96db7-q5gx8 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.21s)

TestMultiControlPlane/serial/AddWorkerNode (52.67s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 node add --alsologtostderr -v 5
E0904 06:12:29.098446 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:12:31.660793 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:12:36.782115 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:12:47.024497 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:13:07.506198 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-480726 node add --alsologtostderr -v 5: (51.743081768s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (52.67s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-480726 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.90s)

TestMultiControlPlane/serial/CopyFile (13.54s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 cp testdata/cp-test.txt ha-480726:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 cp ha-480726:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile122795483/001/cp-test_ha-480726.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 cp ha-480726:/home/docker/cp-test.txt ha-480726-m02:/home/docker/cp-test_ha-480726_ha-480726-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726-m02 "sudo cat /home/docker/cp-test_ha-480726_ha-480726-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 cp ha-480726:/home/docker/cp-test.txt ha-480726-m03:/home/docker/cp-test_ha-480726_ha-480726-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726-m03 "sudo cat /home/docker/cp-test_ha-480726_ha-480726-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 cp ha-480726:/home/docker/cp-test.txt ha-480726-m04:/home/docker/cp-test_ha-480726_ha-480726-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726-m04 "sudo cat /home/docker/cp-test_ha-480726_ha-480726-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 cp testdata/cp-test.txt ha-480726-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 cp ha-480726-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile122795483/001/cp-test_ha-480726-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 cp ha-480726-m02:/home/docker/cp-test.txt ha-480726:/home/docker/cp-test_ha-480726-m02_ha-480726.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726 "sudo cat /home/docker/cp-test_ha-480726-m02_ha-480726.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 cp ha-480726-m02:/home/docker/cp-test.txt ha-480726-m03:/home/docker/cp-test_ha-480726-m02_ha-480726-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726-m03 "sudo cat /home/docker/cp-test_ha-480726-m02_ha-480726-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 cp ha-480726-m02:/home/docker/cp-test.txt ha-480726-m04:/home/docker/cp-test_ha-480726-m02_ha-480726-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726-m04 "sudo cat /home/docker/cp-test_ha-480726-m02_ha-480726-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 cp testdata/cp-test.txt ha-480726-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 cp ha-480726-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile122795483/001/cp-test_ha-480726-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 cp ha-480726-m03:/home/docker/cp-test.txt ha-480726:/home/docker/cp-test_ha-480726-m03_ha-480726.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726 "sudo cat /home/docker/cp-test_ha-480726-m03_ha-480726.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 cp ha-480726-m03:/home/docker/cp-test.txt ha-480726-m02:/home/docker/cp-test_ha-480726-m03_ha-480726-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726-m02 "sudo cat /home/docker/cp-test_ha-480726-m03_ha-480726-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 cp ha-480726-m03:/home/docker/cp-test.txt ha-480726-m04:/home/docker/cp-test_ha-480726-m03_ha-480726-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726-m04 "sudo cat /home/docker/cp-test_ha-480726-m03_ha-480726-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 cp testdata/cp-test.txt ha-480726-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 cp ha-480726-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile122795483/001/cp-test_ha-480726-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 cp ha-480726-m04:/home/docker/cp-test.txt ha-480726:/home/docker/cp-test_ha-480726-m04_ha-480726.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726 "sudo cat /home/docker/cp-test_ha-480726-m04_ha-480726.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 cp ha-480726-m04:/home/docker/cp-test.txt ha-480726-m02:/home/docker/cp-test_ha-480726-m04_ha-480726-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726-m02 "sudo cat /home/docker/cp-test_ha-480726-m04_ha-480726-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 cp ha-480726-m04:/home/docker/cp-test.txt ha-480726-m03:/home/docker/cp-test_ha-480726-m04_ha-480726-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726-m03 "sudo cat /home/docker/cp-test_ha-480726-m04_ha-480726-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.54s)
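Each line of the copy matrix above is one of two primitives applied across every source/destination pair. One representative pair, taken verbatim from the log:

# Push a file from the host into a node, then read it back over ssh:
out/minikube-linux-amd64 -p ha-480726 cp testdata/cp-test.txt ha-480726-m02:/home/docker/cp-test.txt
out/minikube-linux-amd64 -p ha-480726 ssh -n ha-480726-m02 "sudo cat /home/docker/cp-test.txt"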

TestMultiControlPlane/serial/StopSecondaryNode (91.68s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 node stop m02 --alsologtostderr -v 5
E0904 06:13:48.467765 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-480726 node stop m02 --alsologtostderr -v 5: (1m30.994433391s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-480726 status --alsologtostderr -v 5: exit status 7 (685.36527ms)

-- stdout --
	ha-480726
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-480726-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-480726-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-480726-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0904 06:15:07.145815 1134541 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:15:07.145924 1134541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:15:07.145931 1134541 out.go:374] Setting ErrFile to fd 2...
	I0904 06:15:07.145936 1134541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:15:07.146129 1134541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1115845/.minikube/bin
	I0904 06:15:07.146301 1134541 out.go:368] Setting JSON to false
	I0904 06:15:07.146330 1134541 mustload.go:65] Loading cluster: ha-480726
	I0904 06:15:07.146367 1134541 notify.go:220] Checking for updates...
	I0904 06:15:07.146707 1134541 config.go:182] Loaded profile config "ha-480726": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:15:07.146725 1134541 status.go:174] checking status of ha-480726 ...
	I0904 06:15:07.147184 1134541 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:15:07.147245 1134541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:15:07.165536 1134541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42167
	I0904 06:15:07.166123 1134541 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:15:07.166777 1134541 main.go:141] libmachine: Using API Version  1
	I0904 06:15:07.166807 1134541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:15:07.167190 1134541 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:15:07.167394 1134541 main.go:141] libmachine: (ha-480726) Calling .GetState
	I0904 06:15:07.169085 1134541 status.go:371] ha-480726 host status = "Running" (err=<nil>)
	I0904 06:15:07.169102 1134541 host.go:66] Checking if "ha-480726" exists ...
	I0904 06:15:07.169374 1134541 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:15:07.169418 1134541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:15:07.185875 1134541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37867
	I0904 06:15:07.186398 1134541 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:15:07.186967 1134541 main.go:141] libmachine: Using API Version  1
	I0904 06:15:07.187007 1134541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:15:07.187375 1134541 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:15:07.187567 1134541 main.go:141] libmachine: (ha-480726) Calling .GetIP
	I0904 06:15:07.190362 1134541 main.go:141] libmachine: (ha-480726) DBG | domain ha-480726 has defined MAC address 52:54:00:58:b0:1e in network mk-ha-480726
	I0904 06:15:07.190766 1134541 main.go:141] libmachine: (ha-480726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:b0:1e", ip: ""} in network mk-ha-480726: {Iface:virbr1 ExpiryTime:2025-09-04 07:08:25 +0000 UTC Type:0 Mac:52:54:00:58:b0:1e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-480726 Clientid:01:52:54:00:58:b0:1e}
	I0904 06:15:07.190785 1134541 main.go:141] libmachine: (ha-480726) DBG | domain ha-480726 has defined IP address 192.168.39.210 and MAC address 52:54:00:58:b0:1e in network mk-ha-480726
	I0904 06:15:07.190970 1134541 host.go:66] Checking if "ha-480726" exists ...
	I0904 06:15:07.191260 1134541 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:15:07.191306 1134541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:15:07.206591 1134541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43773
	I0904 06:15:07.207169 1134541 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:15:07.207763 1134541 main.go:141] libmachine: Using API Version  1
	I0904 06:15:07.207784 1134541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:15:07.208165 1134541 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:15:07.208355 1134541 main.go:141] libmachine: (ha-480726) Calling .DriverName
	I0904 06:15:07.208627 1134541 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 06:15:07.208667 1134541 main.go:141] libmachine: (ha-480726) Calling .GetSSHHostname
	I0904 06:15:07.211558 1134541 main.go:141] libmachine: (ha-480726) DBG | domain ha-480726 has defined MAC address 52:54:00:58:b0:1e in network mk-ha-480726
	I0904 06:15:07.212023 1134541 main.go:141] libmachine: (ha-480726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:b0:1e", ip: ""} in network mk-ha-480726: {Iface:virbr1 ExpiryTime:2025-09-04 07:08:25 +0000 UTC Type:0 Mac:52:54:00:58:b0:1e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-480726 Clientid:01:52:54:00:58:b0:1e}
	I0904 06:15:07.212050 1134541 main.go:141] libmachine: (ha-480726) DBG | domain ha-480726 has defined IP address 192.168.39.210 and MAC address 52:54:00:58:b0:1e in network mk-ha-480726
	I0904 06:15:07.212158 1134541 main.go:141] libmachine: (ha-480726) Calling .GetSSHPort
	I0904 06:15:07.212325 1134541 main.go:141] libmachine: (ha-480726) Calling .GetSSHKeyPath
	I0904 06:15:07.212474 1134541 main.go:141] libmachine: (ha-480726) Calling .GetSSHUsername
	I0904 06:15:07.212624 1134541 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/ha-480726/id_rsa Username:docker}
	I0904 06:15:07.299967 1134541 ssh_runner.go:195] Run: systemctl --version
	I0904 06:15:07.306765 1134541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:15:07.324500 1134541 kubeconfig.go:125] found "ha-480726" server: "https://192.168.39.254:8443"
	I0904 06:15:07.324544 1134541 api_server.go:166] Checking apiserver status ...
	I0904 06:15:07.324604 1134541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:15:07.347682 1134541 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup
	W0904 06:15:07.360896 1134541 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0904 06:15:07.360978 1134541 ssh_runner.go:195] Run: ls
	I0904 06:15:07.366308 1134541 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0904 06:15:07.372826 1134541 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0904 06:15:07.372857 1134541 status.go:463] ha-480726 apiserver status = Running (err=<nil>)
	I0904 06:15:07.372868 1134541 status.go:176] ha-480726 status: &{Name:ha-480726 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 06:15:07.372901 1134541 status.go:174] checking status of ha-480726-m02 ...
	I0904 06:15:07.373290 1134541 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:15:07.373344 1134541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:15:07.390408 1134541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44647
	I0904 06:15:07.390980 1134541 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:15:07.391521 1134541 main.go:141] libmachine: Using API Version  1
	I0904 06:15:07.391541 1134541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:15:07.391905 1134541 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:15:07.392089 1134541 main.go:141] libmachine: (ha-480726-m02) Calling .GetState
	I0904 06:15:07.393618 1134541 status.go:371] ha-480726-m02 host status = "Stopped" (err=<nil>)
	I0904 06:15:07.393631 1134541 status.go:384] host is not running, skipping remaining checks
	I0904 06:15:07.393637 1134541 status.go:176] ha-480726-m02 status: &{Name:ha-480726-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 06:15:07.393654 1134541 status.go:174] checking status of ha-480726-m03 ...
	I0904 06:15:07.394034 1134541 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:15:07.394077 1134541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:15:07.409925 1134541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42619
	I0904 06:15:07.410447 1134541 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:15:07.411031 1134541 main.go:141] libmachine: Using API Version  1
	I0904 06:15:07.411055 1134541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:15:07.411400 1134541 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:15:07.411582 1134541 main.go:141] libmachine: (ha-480726-m03) Calling .GetState
	I0904 06:15:07.413064 1134541 status.go:371] ha-480726-m03 host status = "Running" (err=<nil>)
	I0904 06:15:07.413082 1134541 host.go:66] Checking if "ha-480726-m03" exists ...
	I0904 06:15:07.413525 1134541 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:15:07.413575 1134541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:15:07.429009 1134541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36589
	I0904 06:15:07.429561 1134541 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:15:07.430084 1134541 main.go:141] libmachine: Using API Version  1
	I0904 06:15:07.430110 1134541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:15:07.430466 1134541 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:15:07.430679 1134541 main.go:141] libmachine: (ha-480726-m03) Calling .GetIP
	I0904 06:15:07.433462 1134541 main.go:141] libmachine: (ha-480726-m03) DBG | domain ha-480726-m03 has defined MAC address 52:54:00:78:05:4d in network mk-ha-480726
	I0904 06:15:07.433976 1134541 main.go:141] libmachine: (ha-480726-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:05:4d", ip: ""} in network mk-ha-480726: {Iface:virbr1 ExpiryTime:2025-09-04 07:11:03 +0000 UTC Type:0 Mac:52:54:00:78:05:4d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-480726-m03 Clientid:01:52:54:00:78:05:4d}
	I0904 06:15:07.433997 1134541 main.go:141] libmachine: (ha-480726-m03) DBG | domain ha-480726-m03 has defined IP address 192.168.39.104 and MAC address 52:54:00:78:05:4d in network mk-ha-480726
	I0904 06:15:07.434155 1134541 host.go:66] Checking if "ha-480726-m03" exists ...
	I0904 06:15:07.434495 1134541 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:15:07.434550 1134541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:15:07.450066 1134541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39167
	I0904 06:15:07.450485 1134541 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:15:07.450965 1134541 main.go:141] libmachine: Using API Version  1
	I0904 06:15:07.450986 1134541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:15:07.451335 1134541 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:15:07.451526 1134541 main.go:141] libmachine: (ha-480726-m03) Calling .DriverName
	I0904 06:15:07.451732 1134541 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 06:15:07.451760 1134541 main.go:141] libmachine: (ha-480726-m03) Calling .GetSSHHostname
	I0904 06:15:07.454629 1134541 main.go:141] libmachine: (ha-480726-m03) DBG | domain ha-480726-m03 has defined MAC address 52:54:00:78:05:4d in network mk-ha-480726
	I0904 06:15:07.455086 1134541 main.go:141] libmachine: (ha-480726-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:05:4d", ip: ""} in network mk-ha-480726: {Iface:virbr1 ExpiryTime:2025-09-04 07:11:03 +0000 UTC Type:0 Mac:52:54:00:78:05:4d Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-480726-m03 Clientid:01:52:54:00:78:05:4d}
	I0904 06:15:07.455110 1134541 main.go:141] libmachine: (ha-480726-m03) DBG | domain ha-480726-m03 has defined IP address 192.168.39.104 and MAC address 52:54:00:78:05:4d in network mk-ha-480726
	I0904 06:15:07.455270 1134541 main.go:141] libmachine: (ha-480726-m03) Calling .GetSSHPort
	I0904 06:15:07.455464 1134541 main.go:141] libmachine: (ha-480726-m03) Calling .GetSSHKeyPath
	I0904 06:15:07.455667 1134541 main.go:141] libmachine: (ha-480726-m03) Calling .GetSSHUsername
	I0904 06:15:07.455829 1134541 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/ha-480726-m03/id_rsa Username:docker}
	I0904 06:15:07.543531 1134541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:15:07.563953 1134541 kubeconfig.go:125] found "ha-480726" server: "https://192.168.39.254:8443"
	I0904 06:15:07.563999 1134541 api_server.go:166] Checking apiserver status ...
	I0904 06:15:07.564056 1134541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:15:07.585963 1134541 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1733/cgroup
	W0904 06:15:07.597400 1134541 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1733/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0904 06:15:07.597461 1134541 ssh_runner.go:195] Run: ls
	I0904 06:15:07.603240 1134541 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0904 06:15:07.608074 1134541 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0904 06:15:07.608096 1134541 status.go:463] ha-480726-m03 apiserver status = Running (err=<nil>)
	I0904 06:15:07.608105 1134541 status.go:176] ha-480726-m03 status: &{Name:ha-480726-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 06:15:07.608121 1134541 status.go:174] checking status of ha-480726-m04 ...
	I0904 06:15:07.608403 1134541 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:15:07.608439 1134541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:15:07.626107 1134541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36353
	I0904 06:15:07.626578 1134541 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:15:07.627201 1134541 main.go:141] libmachine: Using API Version  1
	I0904 06:15:07.627222 1134541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:15:07.627536 1134541 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:15:07.627728 1134541 main.go:141] libmachine: (ha-480726-m04) Calling .GetState
	I0904 06:15:07.629133 1134541 status.go:371] ha-480726-m04 host status = "Running" (err=<nil>)
	I0904 06:15:07.629152 1134541 host.go:66] Checking if "ha-480726-m04" exists ...
	I0904 06:15:07.629436 1134541 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:15:07.629493 1134541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:15:07.644719 1134541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36925
	I0904 06:15:07.645204 1134541 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:15:07.645679 1134541 main.go:141] libmachine: Using API Version  1
	I0904 06:15:07.645709 1134541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:15:07.646041 1134541 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:15:07.646244 1134541 main.go:141] libmachine: (ha-480726-m04) Calling .GetIP
	I0904 06:15:07.649344 1134541 main.go:141] libmachine: (ha-480726-m04) DBG | domain ha-480726-m04 has defined MAC address 52:54:00:98:ce:58 in network mk-ha-480726
	I0904 06:15:07.649764 1134541 main.go:141] libmachine: (ha-480726-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:ce:58", ip: ""} in network mk-ha-480726: {Iface:virbr1 ExpiryTime:2025-09-04 07:12:44 +0000 UTC Type:0 Mac:52:54:00:98:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-480726-m04 Clientid:01:52:54:00:98:ce:58}
	I0904 06:15:07.649791 1134541 main.go:141] libmachine: (ha-480726-m04) DBG | domain ha-480726-m04 has defined IP address 192.168.39.118 and MAC address 52:54:00:98:ce:58 in network mk-ha-480726
	I0904 06:15:07.649997 1134541 host.go:66] Checking if "ha-480726-m04" exists ...
	I0904 06:15:07.650351 1134541 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:15:07.650394 1134541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:15:07.666368 1134541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44279
	I0904 06:15:07.666806 1134541 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:15:07.667248 1134541 main.go:141] libmachine: Using API Version  1
	I0904 06:15:07.667267 1134541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:15:07.667680 1134541 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:15:07.667910 1134541 main.go:141] libmachine: (ha-480726-m04) Calling .DriverName
	I0904 06:15:07.668098 1134541 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 06:15:07.668124 1134541 main.go:141] libmachine: (ha-480726-m04) Calling .GetSSHHostname
	I0904 06:15:07.671036 1134541 main.go:141] libmachine: (ha-480726-m04) DBG | domain ha-480726-m04 has defined MAC address 52:54:00:98:ce:58 in network mk-ha-480726
	I0904 06:15:07.671506 1134541 main.go:141] libmachine: (ha-480726-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:ce:58", ip: ""} in network mk-ha-480726: {Iface:virbr1 ExpiryTime:2025-09-04 07:12:44 +0000 UTC Type:0 Mac:52:54:00:98:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:ha-480726-m04 Clientid:01:52:54:00:98:ce:58}
	I0904 06:15:07.671533 1134541 main.go:141] libmachine: (ha-480726-m04) DBG | domain ha-480726-m04 has defined IP address 192.168.39.118 and MAC address 52:54:00:98:ce:58 in network mk-ha-480726
	I0904 06:15:07.671713 1134541 main.go:141] libmachine: (ha-480726-m04) Calling .GetSSHPort
	I0904 06:15:07.671923 1134541 main.go:141] libmachine: (ha-480726-m04) Calling .GetSSHKeyPath
	I0904 06:15:07.672047 1134541 main.go:141] libmachine: (ha-480726-m04) Calling .GetSSHUsername
	I0904 06:15:07.672226 1134541 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/ha-480726-m04/id_rsa Username:docker}
	I0904 06:15:07.759430 1134541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:15:07.778611 1134541 status.go:176] ha-480726-m04 status: &{Name:ha-480726-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.68s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

TestMultiControlPlane/serial/RestartSecondaryNode (32.49s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 node start m02 --alsologtostderr -v 5
E0904 06:15:10.389298 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-480726 node start m02 --alsologtostderr -v 5: (31.309085485s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-480726 status --alsologtostderr -v 5: (1.096045081s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (32.49s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.09312317s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.09s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (409.84s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 stop --alsologtostderr -v 5
E0904 06:17:04.417661 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:17:26.528924 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:17:54.231551 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:18:27.487183 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-480726 stop --alsologtostderr -v 5: (4m34.45610449s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 start --wait true --alsologtostderr -v 5
E0904 06:22:04.417100 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:22:26.528628 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-480726 start --wait true --alsologtostderr -v 5: (2m15.249617873s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (409.84s)

TestMultiControlPlane/serial/DeleteSecondaryNode (18.43s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-480726 node delete m03 --alsologtostderr -v 5: (17.65655249s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.43s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

TestMultiControlPlane/serial/StopCluster (272.75s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 stop --alsologtostderr -v 5
E0904 06:27:04.417221 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-480726 stop --alsologtostderr -v 5: (4m32.635604816s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-480726 status --alsologtostderr -v 5: exit status 7 (113.009664ms)

-- stdout --
	ha-480726
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-480726-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-480726-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0904 06:27:23.648782 1139134 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:27:23.649074 1139134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:27:23.649084 1139134 out.go:374] Setting ErrFile to fd 2...
	I0904 06:27:23.649089 1139134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:27:23.649289 1139134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1115845/.minikube/bin
	I0904 06:27:23.649516 1139134 out.go:368] Setting JSON to false
	I0904 06:27:23.649554 1139134 mustload.go:65] Loading cluster: ha-480726
	I0904 06:27:23.649659 1139134 notify.go:220] Checking for updates...
	I0904 06:27:23.649986 1139134 config.go:182] Loaded profile config "ha-480726": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:27:23.650011 1139134 status.go:174] checking status of ha-480726 ...
	I0904 06:27:23.650439 1139134 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:27:23.650495 1139134 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:27:23.672831 1139134 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33569
	I0904 06:27:23.673319 1139134 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:27:23.673908 1139134 main.go:141] libmachine: Using API Version  1
	I0904 06:27:23.673932 1139134 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:27:23.674330 1139134 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:27:23.674573 1139134 main.go:141] libmachine: (ha-480726) Calling .GetState
	I0904 06:27:23.676193 1139134 status.go:371] ha-480726 host status = "Stopped" (err=<nil>)
	I0904 06:27:23.676206 1139134 status.go:384] host is not running, skipping remaining checks
	I0904 06:27:23.676212 1139134 status.go:176] ha-480726 status: &{Name:ha-480726 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 06:27:23.676230 1139134 status.go:174] checking status of ha-480726-m02 ...
	I0904 06:27:23.676561 1139134 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:27:23.676607 1139134 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:27:23.691380 1139134 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40695
	I0904 06:27:23.691821 1139134 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:27:23.692269 1139134 main.go:141] libmachine: Using API Version  1
	I0904 06:27:23.692289 1139134 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:27:23.692591 1139134 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:27:23.692763 1139134 main.go:141] libmachine: (ha-480726-m02) Calling .GetState
	I0904 06:27:23.694214 1139134 status.go:371] ha-480726-m02 host status = "Stopped" (err=<nil>)
	I0904 06:27:23.694225 1139134 status.go:384] host is not running, skipping remaining checks
	I0904 06:27:23.694230 1139134 status.go:176] ha-480726-m02 status: &{Name:ha-480726-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 06:27:23.694244 1139134 status.go:174] checking status of ha-480726-m04 ...
	I0904 06:27:23.694508 1139134 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:27:23.694545 1139134 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:27:23.708859 1139134 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32883
	I0904 06:27:23.709233 1139134 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:27:23.709624 1139134 main.go:141] libmachine: Using API Version  1
	I0904 06:27:23.709648 1139134 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:27:23.709966 1139134 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:27:23.710158 1139134 main.go:141] libmachine: (ha-480726-m04) Calling .GetState
	I0904 06:27:23.711409 1139134 status.go:371] ha-480726-m04 host status = "Stopped" (err=<nil>)
	I0904 06:27:23.711421 1139134 status.go:384] host is not running, skipping remaining checks
	I0904 06:27:23.711426 1139134 status.go:176] ha-480726-m04 status: &{Name:ha-480726-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.75s)

TestMultiControlPlane/serial/RestartCluster (108.57s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E0904 06:27:26.528958 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:28:49.593733 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-480726 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m47.774044156s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (108.57s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

TestMultiControlPlane/serial/AddSecondaryNode (77.14s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-480726 node add --control-plane --alsologtostderr -v 5: (1m16.264743021s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-480726 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.14s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

TestJSONOutput/start/Command (92.86s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-344852 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E0904 06:32:04.424608 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-344852 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m32.854884258s)
--- PASS: TestJSONOutput/start/Command (92.86s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-344852 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.69s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-344852 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.34s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-344852 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-344852 --output=json --user=testUser: (7.33832826s)
--- PASS: TestJSONOutput/stop/Command (7.34s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-132144 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-132144 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (63.786948ms)

-- stdout --
	{"specversion":"1.0","id":"d38a94fd-42af-46bc-b489-2dd13663c240","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-132144] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"60e96cfc-2cef-4782-879a-0725dad67db7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21409"}}
	{"specversion":"1.0","id":"b00d8a44-7db9-40f3-80b7-3d9a18522c97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5a765b5d-6de7-4d87-8fd6-8ca875a76ea3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21409-1115845/kubeconfig"}}
	{"specversion":"1.0","id":"a7043071-c8ab-4dc0-8751-9e8e3e74e931","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1115845/.minikube"}}
	{"specversion":"1.0","id":"2fb36953-80c9-46b0-8e84-8761301d7e4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6e32a194-7ce4-4ffb-8acc-1f58d51fd2d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6c37093a-dcb4-4c6b-81db-a21fb457edad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-132144" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-132144
--- PASS: TestErrorJSONOutput (0.20s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (94.54s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-184951 --driver=kvm2  --container-runtime=crio
E0904 06:32:26.529398 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-184951 --driver=kvm2  --container-runtime=crio: (43.134585329s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-207178 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-207178 --driver=kvm2  --container-runtime=crio: (48.688443686s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-184951
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-207178
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-207178" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-207178
helpers_test.go:175: Cleaning up "first-184951" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-184951
--- PASS: TestMinikubeProfile (94.54s)

TestMountStart/serial/StartWithMountFirst (27.89s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-772634 --memory=3072 --mount-string /tmp/TestMountStartserial1712469728/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-772634 --memory=3072 --mount-string /tmp/TestMountStartserial1712469728/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.89274266s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.89s)

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-772634 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-772634 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

TestMountStart/serial/StartWithMountSecond (29.03s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-787164 --memory=3072 --mount-string /tmp/TestMountStartserial1712469728/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-787164 --memory=3072 --mount-string /tmp/TestMountStartserial1712469728/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.03094s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.03s)

TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-787164 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-787164 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (1.02s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-772634 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-772634 --alsologtostderr -v=5: (1.02233979s)
--- PASS: TestMountStart/serial/DeleteFirst (1.02s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-787164 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-787164 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (1.7s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-787164
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-787164: (1.702538131s)
--- PASS: TestMountStart/serial/Stop (1.70s)

TestMountStart/serial/RestartStopped (23.7s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-787164
E0904 06:35:07.489157 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-787164: (22.696791483s)
--- PASS: TestMountStart/serial/RestartStopped (23.70s)

TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-787164 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-787164 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (111.94s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-140177 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0904 06:37:04.417410 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-140177 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m51.510925785s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (111.94s)

TestMultiNode/serial/DeployApp2Nodes (5.8s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140177 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140177 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-140177 -- rollout status deployment/busybox: (4.308693563s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140177 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140177 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140177 -- exec busybox-7b57f96db7-bdgcr -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140177 -- exec busybox-7b57f96db7-j7qv9 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140177 -- exec busybox-7b57f96db7-bdgcr -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140177 -- exec busybox-7b57f96db7-j7qv9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140177 -- exec busybox-7b57f96db7-bdgcr -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140177 -- exec busybox-7b57f96db7-j7qv9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.80s)

TestMultiNode/serial/PingHostFrom2Pods (0.78s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140177 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140177 -- exec busybox-7b57f96db7-bdgcr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140177 -- exec busybox-7b57f96db7-bdgcr -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140177 -- exec busybox-7b57f96db7-j7qv9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-140177 -- exec busybox-7b57f96db7-j7qv9 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)

TestMultiNode/serial/AddNode (50.29s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-140177 -v=5 --alsologtostderr
E0904 06:37:26.528344 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-140177 -v=5 --alsologtostderr: (49.716977333s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.29s)
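Outside the harness, the same two commands grow the cluster by one worker and confirm every node reports Running:

    # add a worker to the existing profile, then verify per-node state
    out/minikube-linux-amd64 node add -p multinode-140177 -v=5 --alsologtostderr
    out/minikube-linux-amd64 -p multinode-140177 status --alsologtostderr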

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-140177 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.6s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.60s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.41s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 cp testdata/cp-test.txt multinode-140177:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 ssh -n multinode-140177 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 cp multinode-140177:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile135668807/001/cp-test_multinode-140177.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 ssh -n multinode-140177 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 cp multinode-140177:/home/docker/cp-test.txt multinode-140177-m02:/home/docker/cp-test_multinode-140177_multinode-140177-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 ssh -n multinode-140177 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 ssh -n multinode-140177-m02 "sudo cat /home/docker/cp-test_multinode-140177_multinode-140177-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 cp multinode-140177:/home/docker/cp-test.txt multinode-140177-m03:/home/docker/cp-test_multinode-140177_multinode-140177-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 ssh -n multinode-140177 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 ssh -n multinode-140177-m03 "sudo cat /home/docker/cp-test_multinode-140177_multinode-140177-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 cp testdata/cp-test.txt multinode-140177-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 ssh -n multinode-140177-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 cp multinode-140177-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile135668807/001/cp-test_multinode-140177-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 ssh -n multinode-140177-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 cp multinode-140177-m02:/home/docker/cp-test.txt multinode-140177:/home/docker/cp-test_multinode-140177-m02_multinode-140177.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 ssh -n multinode-140177-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 ssh -n multinode-140177 "sudo cat /home/docker/cp-test_multinode-140177-m02_multinode-140177.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 cp multinode-140177-m02:/home/docker/cp-test.txt multinode-140177-m03:/home/docker/cp-test_multinode-140177-m02_multinode-140177-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 ssh -n multinode-140177-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 ssh -n multinode-140177-m03 "sudo cat /home/docker/cp-test_multinode-140177-m02_multinode-140177-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 cp testdata/cp-test.txt multinode-140177-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 ssh -n multinode-140177-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 cp multinode-140177-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile135668807/001/cp-test_multinode-140177-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 ssh -n multinode-140177-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 cp multinode-140177-m03:/home/docker/cp-test.txt multinode-140177:/home/docker/cp-test_multinode-140177-m03_multinode-140177.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 ssh -n multinode-140177-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 ssh -n multinode-140177 "sudo cat /home/docker/cp-test_multinode-140177-m03_multinode-140177.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 cp multinode-140177-m03:/home/docker/cp-test.txt multinode-140177-m02:/home/docker/cp-test_multinode-140177-m03_multinode-140177-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 ssh -n multinode-140177-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 ssh -n multinode-140177-m02 "sudo cat /home/docker/cp-test_multinode-140177-m03_multinode-140177-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.41s)
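The long matrix above exercises three minikube cp directions (host to node, node to host, node to node), each verified with sudo cat over ssh. A condensed sketch; the destination filenames here are illustrative, not the exact ones from the run:

    # host -> node
    out/minikube-linux-amd64 -p multinode-140177 cp testdata/cp-test.txt multinode-140177:/home/docker/cp-test.txt
    # node -> host
    out/minikube-linux-amd64 -p multinode-140177 cp multinode-140177:/home/docker/cp-test.txt /tmp/cp-test-from-node.txt
    # node -> node, then verify on the destination
    out/minikube-linux-amd64 -p multinode-140177 cp multinode-140177:/home/docker/cp-test.txt multinode-140177-m02:/home/docker/cp-test-copy.txt
    out/minikube-linux-amd64 -p multinode-140177 ssh -n multinode-140177-m02 "sudo cat /home/docker/cp-test-copy.txt"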

                                                
                                    
TestMultiNode/serial/StopNode (3.16s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-140177 node stop m03: (2.285715672s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-140177 status: exit status 7 (433.128034ms)

                                                
                                                
-- stdout --
	multinode-140177
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-140177-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-140177-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-140177 status --alsologtostderr: exit status 7 (437.453242ms)

                                                
                                                
-- stdout --
	multinode-140177
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-140177-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-140177-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 06:38:16.697842 1147504 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:38:16.697938 1147504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:38:16.697946 1147504 out.go:374] Setting ErrFile to fd 2...
	I0904 06:38:16.697950 1147504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:38:16.698163 1147504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1115845/.minikube/bin
	I0904 06:38:16.698332 1147504 out.go:368] Setting JSON to false
	I0904 06:38:16.698362 1147504 mustload.go:65] Loading cluster: multinode-140177
	I0904 06:38:16.698460 1147504 notify.go:220] Checking for updates...
	I0904 06:38:16.698871 1147504 config.go:182] Loaded profile config "multinode-140177": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:38:16.698899 1147504 status.go:174] checking status of multinode-140177 ...
	I0904 06:38:16.699404 1147504 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:38:16.699465 1147504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:38:16.715173 1147504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46273
	I0904 06:38:16.715662 1147504 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:38:16.716274 1147504 main.go:141] libmachine: Using API Version  1
	I0904 06:38:16.716304 1147504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:38:16.716598 1147504 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:38:16.716765 1147504 main.go:141] libmachine: (multinode-140177) Calling .GetState
	I0904 06:38:16.718361 1147504 status.go:371] multinode-140177 host status = "Running" (err=<nil>)
	I0904 06:38:16.718376 1147504 host.go:66] Checking if "multinode-140177" exists ...
	I0904 06:38:16.718691 1147504 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:38:16.718740 1147504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:38:16.734752 1147504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36801
	I0904 06:38:16.735165 1147504 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:38:16.735599 1147504 main.go:141] libmachine: Using API Version  1
	I0904 06:38:16.735620 1147504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:38:16.735959 1147504 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:38:16.736144 1147504 main.go:141] libmachine: (multinode-140177) Calling .GetIP
	I0904 06:38:16.738807 1147504 main.go:141] libmachine: (multinode-140177) DBG | domain multinode-140177 has defined MAC address 52:54:00:f5:dd:59 in network mk-multinode-140177
	I0904 06:38:16.739207 1147504 main.go:141] libmachine: (multinode-140177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:dd:59", ip: ""} in network mk-multinode-140177: {Iface:virbr1 ExpiryTime:2025-09-04 07:35:31 +0000 UTC Type:0 Mac:52:54:00:f5:dd:59 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-140177 Clientid:01:52:54:00:f5:dd:59}
	I0904 06:38:16.739236 1147504 main.go:141] libmachine: (multinode-140177) DBG | domain multinode-140177 has defined IP address 192.168.39.233 and MAC address 52:54:00:f5:dd:59 in network mk-multinode-140177
	I0904 06:38:16.739361 1147504 host.go:66] Checking if "multinode-140177" exists ...
	I0904 06:38:16.739652 1147504 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:38:16.739686 1147504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:38:16.754810 1147504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45783
	I0904 06:38:16.755327 1147504 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:38:16.755825 1147504 main.go:141] libmachine: Using API Version  1
	I0904 06:38:16.755852 1147504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:38:16.756170 1147504 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:38:16.756360 1147504 main.go:141] libmachine: (multinode-140177) Calling .DriverName
	I0904 06:38:16.756572 1147504 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 06:38:16.756591 1147504 main.go:141] libmachine: (multinode-140177) Calling .GetSSHHostname
	I0904 06:38:16.759165 1147504 main.go:141] libmachine: (multinode-140177) DBG | domain multinode-140177 has defined MAC address 52:54:00:f5:dd:59 in network mk-multinode-140177
	I0904 06:38:16.759593 1147504 main.go:141] libmachine: (multinode-140177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:dd:59", ip: ""} in network mk-multinode-140177: {Iface:virbr1 ExpiryTime:2025-09-04 07:35:31 +0000 UTC Type:0 Mac:52:54:00:f5:dd:59 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-140177 Clientid:01:52:54:00:f5:dd:59}
	I0904 06:38:16.759634 1147504 main.go:141] libmachine: (multinode-140177) DBG | domain multinode-140177 has defined IP address 192.168.39.233 and MAC address 52:54:00:f5:dd:59 in network mk-multinode-140177
	I0904 06:38:16.759837 1147504 main.go:141] libmachine: (multinode-140177) Calling .GetSSHPort
	I0904 06:38:16.760031 1147504 main.go:141] libmachine: (multinode-140177) Calling .GetSSHKeyPath
	I0904 06:38:16.760223 1147504 main.go:141] libmachine: (multinode-140177) Calling .GetSSHUsername
	I0904 06:38:16.760397 1147504 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/multinode-140177/id_rsa Username:docker}
	I0904 06:38:16.844286 1147504 ssh_runner.go:195] Run: systemctl --version
	I0904 06:38:16.850464 1147504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:38:16.866508 1147504 kubeconfig.go:125] found "multinode-140177" server: "https://192.168.39.233:8443"
	I0904 06:38:16.866546 1147504 api_server.go:166] Checking apiserver status ...
	I0904 06:38:16.866582 1147504 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 06:38:16.884416 1147504 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1362/cgroup
	W0904 06:38:16.894617 1147504 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1362/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0904 06:38:16.894664 1147504 ssh_runner.go:195] Run: ls
	I0904 06:38:16.900324 1147504 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0904 06:38:16.904809 1147504 api_server.go:279] https://192.168.39.233:8443/healthz returned 200:
	ok
	I0904 06:38:16.904835 1147504 status.go:463] multinode-140177 apiserver status = Running (err=<nil>)
	I0904 06:38:16.904845 1147504 status.go:176] multinode-140177 status: &{Name:multinode-140177 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 06:38:16.904871 1147504 status.go:174] checking status of multinode-140177-m02 ...
	I0904 06:38:16.905168 1147504 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:38:16.905203 1147504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:38:16.920688 1147504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46725
	I0904 06:38:16.921181 1147504 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:38:16.921702 1147504 main.go:141] libmachine: Using API Version  1
	I0904 06:38:16.921727 1147504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:38:16.922115 1147504 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:38:16.922308 1147504 main.go:141] libmachine: (multinode-140177-m02) Calling .GetState
	I0904 06:38:16.923849 1147504 status.go:371] multinode-140177-m02 host status = "Running" (err=<nil>)
	I0904 06:38:16.923867 1147504 host.go:66] Checking if "multinode-140177-m02" exists ...
	I0904 06:38:16.924145 1147504 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:38:16.924181 1147504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:38:16.939524 1147504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34241
	I0904 06:38:16.939918 1147504 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:38:16.940348 1147504 main.go:141] libmachine: Using API Version  1
	I0904 06:38:16.940369 1147504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:38:16.940808 1147504 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:38:16.940992 1147504 main.go:141] libmachine: (multinode-140177-m02) Calling .GetIP
	I0904 06:38:16.943567 1147504 main.go:141] libmachine: (multinode-140177-m02) DBG | domain multinode-140177-m02 has defined MAC address 52:54:00:c7:97:73 in network mk-multinode-140177
	I0904 06:38:16.944138 1147504 main.go:141] libmachine: (multinode-140177-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:97:73", ip: ""} in network mk-multinode-140177: {Iface:virbr1 ExpiryTime:2025-09-04 07:36:33 +0000 UTC Type:0 Mac:52:54:00:c7:97:73 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:multinode-140177-m02 Clientid:01:52:54:00:c7:97:73}
	I0904 06:38:16.944183 1147504 main.go:141] libmachine: (multinode-140177-m02) DBG | domain multinode-140177-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:c7:97:73 in network mk-multinode-140177
	I0904 06:38:16.944290 1147504 host.go:66] Checking if "multinode-140177-m02" exists ...
	I0904 06:38:16.944631 1147504 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:38:16.944685 1147504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:38:16.959905 1147504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33145
	I0904 06:38:16.960328 1147504 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:38:16.960796 1147504 main.go:141] libmachine: Using API Version  1
	I0904 06:38:16.960821 1147504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:38:16.961130 1147504 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:38:16.961321 1147504 main.go:141] libmachine: (multinode-140177-m02) Calling .DriverName
	I0904 06:38:16.961488 1147504 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 06:38:16.961506 1147504 main.go:141] libmachine: (multinode-140177-m02) Calling .GetSSHHostname
	I0904 06:38:16.963938 1147504 main.go:141] libmachine: (multinode-140177-m02) DBG | domain multinode-140177-m02 has defined MAC address 52:54:00:c7:97:73 in network mk-multinode-140177
	I0904 06:38:16.964266 1147504 main.go:141] libmachine: (multinode-140177-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:97:73", ip: ""} in network mk-multinode-140177: {Iface:virbr1 ExpiryTime:2025-09-04 07:36:33 +0000 UTC Type:0 Mac:52:54:00:c7:97:73 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:multinode-140177-m02 Clientid:01:52:54:00:c7:97:73}
	I0904 06:38:16.964286 1147504 main.go:141] libmachine: (multinode-140177-m02) DBG | domain multinode-140177-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:c7:97:73 in network mk-multinode-140177
	I0904 06:38:16.964422 1147504 main.go:141] libmachine: (multinode-140177-m02) Calling .GetSSHPort
	I0904 06:38:16.964605 1147504 main.go:141] libmachine: (multinode-140177-m02) Calling .GetSSHKeyPath
	I0904 06:38:16.964766 1147504 main.go:141] libmachine: (multinode-140177-m02) Calling .GetSSHUsername
	I0904 06:38:16.964901 1147504 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-1115845/.minikube/machines/multinode-140177-m02/id_rsa Username:docker}
	I0904 06:38:17.050162 1147504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 06:38:17.067417 1147504 status.go:176] multinode-140177-m02 status: &{Name:multinode-140177-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0904 06:38:17.067459 1147504 status.go:174] checking status of multinode-140177-m03 ...
	I0904 06:38:17.067834 1147504 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:38:17.067882 1147504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:38:17.083600 1147504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39133
	I0904 06:38:17.084062 1147504 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:38:17.084514 1147504 main.go:141] libmachine: Using API Version  1
	I0904 06:38:17.084550 1147504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:38:17.084886 1147504 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:38:17.085085 1147504 main.go:141] libmachine: (multinode-140177-m03) Calling .GetState
	I0904 06:38:17.086684 1147504 status.go:371] multinode-140177-m03 host status = "Stopped" (err=<nil>)
	I0904 06:38:17.086698 1147504 status.go:384] host is not running, skipping remaining checks
	I0904 06:38:17.086704 1147504 status.go:176] multinode-140177-m03 status: &{Name:multinode-140177-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.16s)
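Note the contract being tested: with one node stopped, status still prints per-node state but exits 7 instead of 0, so scripts can branch on a degraded cluster. A sketch:

    out/minikube-linux-amd64 -p multinode-140177 node stop m03
    out/minikube-linux-amd64 -p multinode-140177 status
    rc=$?
    [ "$rc" -eq 7 ] && echo "at least one host is Stopped (exit $rc)"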

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.86s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-140177 node start m03 -v=5 --alsologtostderr: (38.232026006s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.86s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (318.99s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-140177
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-140177
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-140177: (3m3.410225325s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-140177 --wait=true -v=5 --alsologtostderr
E0904 06:42:04.417110 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:42:26.529045 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-140177 --wait=true -v=5 --alsologtostderr: (2m15.472157822s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-140177
--- PASS: TestMultiNode/serial/RestartKeepsNodes (318.99s)
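What this verifies: a full stop/start cycle preserves the profile's node list. A sketch of the same check, with an illustrative temp file for the before/after comparison:

    out/minikube-linux-amd64 node list -p multinode-140177 > /tmp/nodes-before.txt   # illustrative path
    out/minikube-linux-amd64 stop -p multinode-140177
    out/minikube-linux-amd64 start -p multinode-140177 --wait=true
    # empty diff means every node survived the restart
    out/minikube-linux-amd64 node list -p multinode-140177 | diff /tmp/nodes-before.txt -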

                                                
                                    
TestMultiNode/serial/DeleteNode (2.83s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-140177 node delete m03: (2.295403245s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.83s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (181.87s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 stop
E0904 06:45:29.597484 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 06:47:04.424384 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-140177 stop: (3m1.6862668s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-140177 status: exit status 7 (95.020841ms)

                                                
                                                
-- stdout --
	multinode-140177
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-140177-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-140177 status --alsologtostderr: exit status 7 (85.022324ms)

                                                
                                                
-- stdout --
	multinode-140177
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-140177-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 06:47:19.595246 1150367 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:47:19.595494 1150367 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:47:19.595502 1150367 out.go:374] Setting ErrFile to fd 2...
	I0904 06:47:19.595506 1150367 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:47:19.595680 1150367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1115845/.minikube/bin
	I0904 06:47:19.595839 1150367 out.go:368] Setting JSON to false
	I0904 06:47:19.595870 1150367 mustload.go:65] Loading cluster: multinode-140177
	I0904 06:47:19.595989 1150367 notify.go:220] Checking for updates...
	I0904 06:47:19.596192 1150367 config.go:182] Loaded profile config "multinode-140177": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:47:19.596210 1150367 status.go:174] checking status of multinode-140177 ...
	I0904 06:47:19.597505 1150367 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:47:19.597607 1150367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:47:19.613889 1150367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43283
	I0904 06:47:19.614296 1150367 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:47:19.614844 1150367 main.go:141] libmachine: Using API Version  1
	I0904 06:47:19.614877 1150367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:47:19.615200 1150367 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:47:19.615402 1150367 main.go:141] libmachine: (multinode-140177) Calling .GetState
	I0904 06:47:19.616894 1150367 status.go:371] multinode-140177 host status = "Stopped" (err=<nil>)
	I0904 06:47:19.616907 1150367 status.go:384] host is not running, skipping remaining checks
	I0904 06:47:19.616912 1150367 status.go:176] multinode-140177 status: &{Name:multinode-140177 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 06:47:19.616942 1150367 status.go:174] checking status of multinode-140177-m02 ...
	I0904 06:47:19.617198 1150367 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21409-1115845/.minikube/bin/docker-machine-driver-kvm2
	I0904 06:47:19.617229 1150367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 06:47:19.631974 1150367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37217
	I0904 06:47:19.632445 1150367 main.go:141] libmachine: () Calling .GetVersion
	I0904 06:47:19.632895 1150367 main.go:141] libmachine: Using API Version  1
	I0904 06:47:19.632915 1150367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 06:47:19.633238 1150367 main.go:141] libmachine: () Calling .GetMachineName
	I0904 06:47:19.633421 1150367 main.go:141] libmachine: (multinode-140177-m02) Calling .GetState
	I0904 06:47:19.634906 1150367 status.go:371] multinode-140177-m02 host status = "Stopped" (err=<nil>)
	I0904 06:47:19.634921 1150367 status.go:384] host is not running, skipping remaining checks
	I0904 06:47:19.634927 1150367 status.go:176] multinode-140177-m02 status: &{Name:multinode-140177-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.87s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (103.1s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-140177 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0904 06:47:26.528452 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-140177 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m42.552247094s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-140177 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (103.10s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (46.58s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-140177
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-140177-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-140177-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (65.614176ms)

                                                
                                                
-- stdout --
	* [multinode-140177-m02] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-1115845/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1115845/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-140177-m02' is duplicated with machine name 'multinode-140177-m02' in profile 'multinode-140177'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-140177-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-140177-m03 --driver=kvm2  --container-runtime=crio: (45.42054966s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-140177
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-140177: exit status 80 (225.511423ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-140177 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-140177-m03 already exists in multinode-140177-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-140177-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.58s)
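Both failures above come from name validation: a new profile may not reuse a machine name owned by an existing multi-node profile (exit 14), and node add refuses a node name that already belongs to another profile (exit 80). A sketch of the first case:

    # multinode-140177 already owns a machine named multinode-140177-m02,
    # so this exits 14 with MK_USAGE before creating anything
    out/minikube-linux-amd64 start -p multinode-140177-m02 --driver=kvm2 --container-runtime=crio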

                                                
                                    
TestScheduledStopUnix (115.17s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-881838 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-881838 --memory=3072 --driver=kvm2  --container-runtime=crio: (43.511142578s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-881838 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-881838 -n scheduled-stop-881838
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-881838 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0904 06:53:29.576277 1120074 retry.go:31] will retry after 124.912µs: open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/scheduled-stop-881838/pid: no such file or directory
I0904 06:53:29.577474 1120074 retry.go:31] will retry after 207.382µs: open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/scheduled-stop-881838/pid: no such file or directory
I0904 06:53:29.578611 1120074 retry.go:31] will retry after 280.639µs: open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/scheduled-stop-881838/pid: no such file or directory
I0904 06:53:29.579738 1120074 retry.go:31] will retry after 225.803µs: open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/scheduled-stop-881838/pid: no such file or directory
I0904 06:53:29.580859 1120074 retry.go:31] will retry after 297.633µs: open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/scheduled-stop-881838/pid: no such file or directory
I0904 06:53:29.581989 1120074 retry.go:31] will retry after 550.05µs: open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/scheduled-stop-881838/pid: no such file or directory
I0904 06:53:29.583109 1120074 retry.go:31] will retry after 1.522126ms: open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/scheduled-stop-881838/pid: no such file or directory
I0904 06:53:29.585324 1120074 retry.go:31] will retry after 1.028937ms: open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/scheduled-stop-881838/pid: no such file or directory
I0904 06:53:29.586455 1120074 retry.go:31] will retry after 3.011096ms: open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/scheduled-stop-881838/pid: no such file or directory
I0904 06:53:29.589636 1120074 retry.go:31] will retry after 2.338225ms: open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/scheduled-stop-881838/pid: no such file or directory
I0904 06:53:29.592831 1120074 retry.go:31] will retry after 8.028323ms: open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/scheduled-stop-881838/pid: no such file or directory
I0904 06:53:29.600986 1120074 retry.go:31] will retry after 8.673611ms: open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/scheduled-stop-881838/pid: no such file or directory
I0904 06:53:29.610191 1120074 retry.go:31] will retry after 7.605279ms: open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/scheduled-stop-881838/pid: no such file or directory
I0904 06:53:29.618438 1120074 retry.go:31] will retry after 20.513668ms: open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/scheduled-stop-881838/pid: no such file or directory
I0904 06:53:29.639154 1120074 retry.go:31] will retry after 23.453791ms: open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/scheduled-stop-881838/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-881838 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-881838 -n scheduled-stop-881838
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-881838
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-881838 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-881838
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-881838: exit status 7 (68.670262ms)

                                                
                                                
-- stdout --
	scheduled-stop-881838
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-881838 -n scheduled-stop-881838
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-881838 -n scheduled-stop-881838: exit status 7 (64.838203ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-881838" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-881838
--- PASS: TestScheduledStopUnix (115.17s)
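The schedule/cancel/reschedule flow above, replayed by hand; the sleep is an illustrative way to outlast the 15s window before checking that the host reached Stopped (at which point status exits 7):

    out/minikube-linux-amd64 stop -p scheduled-stop-881838 --schedule 5m
    out/minikube-linux-amd64 stop -p scheduled-stop-881838 --cancel-scheduled
    out/minikube-linux-amd64 stop -p scheduled-stop-881838 --schedule 15s
    sleep 20   # illustrative; give the scheduled stop time to fire
    out/minikube-linux-amd64 status -p scheduled-stop-881838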

                                                
                                    
TestRunningBinaryUpgrade (157.03s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1928504319 start -p running-upgrade-050549 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1928504319 start -p running-upgrade-050549 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m37.235734364s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-050549 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-050549 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (56.165140316s)
helpers_test.go:175: Cleaning up "running-upgrade-050549" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-050549
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-050549: (1.017532982s)
--- PASS: TestRunningBinaryUpgrade (157.03s)
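The upgrade path under test: boot a cluster with a released binary, then point the binary under test at the same still-running profile. The versioned path is the temp copy this run downloaded:

    # old released binary creates the cluster
    /tmp/minikube-v1.32.0.1928504319 start -p running-upgrade-050549 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
    # binary under test takes over the running profile in place
    out/minikube-linux-amd64 start -p running-upgrade-050549 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio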

                                                
                                    
TestKubernetesUpgrade (344.27s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-177439 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-177439 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m39.778226113s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-177439
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-177439: (2.290992299s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-177439 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-177439 status --format={{.Host}}: exit status 7 (65.405273ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-177439 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-177439 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m24.942413789s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-177439 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-177439 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-177439 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (86.574993ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-177439] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-1115845/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1115845/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-177439
	    minikube start -p kubernetes-upgrade-177439 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1774392 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-177439 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-177439 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-177439 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (2m35.841035393s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-177439" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-177439
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-177439: (1.210930621s)
--- PASS: TestKubernetesUpgrade (344.27s)
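The sequence under test, condensed: upgrading across a stop/start works, while an in-place downgrade is refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) and a suggestion to delete and recreate instead:

    out/minikube-linux-amd64 start -p kubernetes-upgrade-177439 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 stop -p kubernetes-upgrade-177439
    out/minikube-linux-amd64 start -p kubernetes-upgrade-177439 --memory=3072 --kubernetes-version=v1.34.0 --driver=kvm2 --container-runtime=crio
    # downgrade attempt: fails fast, cluster is left at v1.34.0
    out/minikube-linux-amd64 start -p kubernetes-upgrade-177439 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio || echo "downgrade refused as expected"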

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-324880 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-324880 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (94.744762ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-324880] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-1115845/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1115845/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
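The guard being tested: --no-kubernetes and --kubernetes-version are mutually exclusive, and the error message points at the global config as the usual culprit:

    # exits 14 (MK_USAGE)
    out/minikube-linux-amd64 start -p NoKubernetes-324880 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
    # if a global kubernetes-version is set, clear it as the message suggests
    out/minikube-linux-amd64 config unset kubernetes-version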

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (123.73s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-324880 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-324880 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (2m3.476475172s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-324880 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (123.73s)

                                                
                                    
TestNetworkPlugins/group/false (3.42s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-644084 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-644084 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (99.888467ms)

                                                
                                                
-- stdout --
	* [false-644084] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-1115845/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1115845/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 06:55:42.781167 1155384 out.go:360] Setting OutFile to fd 1 ...
	I0904 06:55:42.781420 1155384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:55:42.781429 1155384 out.go:374] Setting ErrFile to fd 2...
	I0904 06:55:42.781433 1155384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 06:55:42.781634 1155384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-1115845/.minikube/bin
	I0904 06:55:42.782184 1155384 out.go:368] Setting JSON to false
	I0904 06:55:42.783153 1155384 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":16686,"bootTime":1756952257,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 06:55:42.783246 1155384 start.go:140] virtualization: kvm guest
	I0904 06:55:42.785767 1155384 out.go:179] * [false-644084] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 06:55:42.786847 1155384 out.go:179]   - MINIKUBE_LOCATION=21409
	I0904 06:55:42.786852 1155384 notify.go:220] Checking for updates...
	I0904 06:55:42.787950 1155384 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 06:55:42.789052 1155384 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-1115845/kubeconfig
	I0904 06:55:42.789942 1155384 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-1115845/.minikube
	I0904 06:55:42.790971 1155384 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 06:55:42.791948 1155384 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 06:55:42.793404 1155384 config.go:182] Loaded profile config "NoKubernetes-324880": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:55:42.793518 1155384 config.go:182] Loaded profile config "force-systemd-env-199272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 06:55:42.793601 1155384 config.go:182] Loaded profile config "kubernetes-upgrade-177439": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0904 06:55:42.793715 1155384 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 06:55:42.829183 1155384 out.go:179] * Using the kvm2 driver based on user configuration
	I0904 06:55:42.830106 1155384 start.go:304] selected driver: kvm2
	I0904 06:55:42.830119 1155384 start.go:918] validating driver "kvm2" against <nil>
	I0904 06:55:42.830130 1155384 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 06:55:42.831697 1155384 out.go:203] 
	W0904 06:55:42.832669 1155384 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0904 06:55:42.833588 1155384 out.go:203] 

                                                
                                                
** /stderr **
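The failure mode here: the crio runtime requires a CNI, so --cni=false is rejected during validation (exit 14) before any VM exists, which is why every probe in the debugLogs dump below reports a missing context or profile:

    # refused up front; no machine or kubeconfig context is ever created
    out/minikube-linux-amd64 start -p false-644084 --memory=3072 --cni=false --driver=kvm2 --container-runtime=crio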
net_test.go:88: 
----------------------- debugLogs start: false-644084 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-644084

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-644084

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-644084

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-644084

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-644084

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-644084

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-644084

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-644084

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-644084

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-644084

>>> host: /etc/nsswitch.conf:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: /etc/hosts:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: /etc/resolv.conf:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-644084

>>> host: crictl pods:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: crictl containers:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> k8s: describe netcat deployment:
error: context "false-644084" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-644084" does not exist

>>> k8s: netcat logs:
error: context "false-644084" does not exist

>>> k8s: describe coredns deployment:
error: context "false-644084" does not exist

>>> k8s: describe coredns pods:
error: context "false-644084" does not exist

>>> k8s: coredns logs:
error: context "false-644084" does not exist

>>> k8s: describe api server pod(s):
error: context "false-644084" does not exist

>>> k8s: api server logs:
error: context "false-644084" does not exist

>>> host: /etc/cni:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: ip a s:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: ip r s:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: iptables-save:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: iptables table nat:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> k8s: describe kube-proxy daemon set:
error: context "false-644084" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-644084" does not exist

>>> k8s: kube-proxy logs:
error: context "false-644084" does not exist

>>> host: kubelet daemon status:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: kubelet daemon config:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> k8s: kubelet logs:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-644084

>>> host: docker daemon status:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: docker daemon config:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: /etc/docker/daemon.json:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: docker system info:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: cri-docker daemon status:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: cri-docker daemon config:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: cri-dockerd version:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: containerd daemon status:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: containerd daemon config:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: /etc/containerd/config.toml:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: containerd config dump:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: crio daemon status:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: crio daemon config:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: /etc/crio:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

>>> host: crio config:
* Profile "false-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-644084"

----------------------- debugLogs end: false-644084 [took: 3.137035174s] --------------------------------
helpers_test.go:175: Cleaning up "false-644084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-644084
--- PASS: TestNetworkPlugins/group/false (3.42s)
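
Every debugLogs probe above fails identically because the start command exited during flag validation, so neither a minikube profile nor a kubeconfig context for false-644084 was ever created; [pass: true] reflects that this failure is the expected outcome for the "false" CNI case. A sketch of confirming both absences by hand (expected results shown as comments):

	kubectl config get-contexts false-644084    # error: context false-644084 not found
	out/minikube-linux-amd64 profile list       # false-644084 does not appear in the table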

TestStoppedBinaryUpgrade/Setup (2.58s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.58s)

TestStoppedBinaryUpgrade/Upgrade (135.42s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3758377255 start -p stopped-upgrade-798275 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3758377255 start -p stopped-upgrade-798275 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m6.747866995s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3758377255 -p stopped-upgrade-798275 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3758377255 -p stopped-upgrade-798275 stop: (2.113567029s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-798275 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0904 06:57:26.528693 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-798275 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m6.561371649s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (135.42s)
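
Condensed, the upgrade path this test exercises is: provision with a pinned v1.32.0 release binary, stop the cluster, then restart the same profile with the binary under test so it must adopt the existing on-disk state. The three steps, taken from the log above (the temp-file path is specific to this run; logging flags omitted):

	/tmp/minikube-v1.32.0.3758377255 start -p stopped-upgrade-798275 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
	/tmp/minikube-v1.32.0.3758377255 -p stopped-upgrade-798275 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-798275 --memory=3072 --driver=kvm2 --container-runtime=crio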

TestNoKubernetes/serial/StartWithStopK8s (59.75s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-324880 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0904 06:57:04.423107 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-324880 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (58.320981794s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-324880 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-324880 status -o json: exit status 2 (314.89257ms)

-- stdout --
	{"Name":"NoKubernetes-324880","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-324880
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-324880: (1.114646684s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (59.75s)
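
The exit status 2 together with the stdout JSON is the combination this test asserts: minikube status returns a non-zero, state-encoding exit code when components are stopped, while the JSON shows the VM itself is up with Kubernetes disabled. A sketch for scripting the same check (assumes jq is available on the host; the pipeline's exit code is jq's, so minikube's non-zero status does not abort it):

	out/minikube-linux-amd64 -p NoKubernetes-324880 status -o json \
	  | jq -r '.Host + "/" + .Kubelet'    # expect: Running/Stopped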

TestNoKubernetes/serial/Start (49.71s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-324880 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-324880 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (49.711987696s)
--- PASS: TestNoKubernetes/serial/Start (49.71s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-798275
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-798275: (1.145188901s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-324880 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-324880 "sudo systemctl is-active --quiet service kubelet": exit status 1 (201.798854ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
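
The "status 4" relayed by ssh is the exit code of the remote systemctl is-active check: 0 would mean kubelet is active, and any non-zero code (the exact value varies by systemd version; 4 typically indicates an unknown/absent unit) is what the test wants with Kubernetes disabled. The same check, sketched as it could be run by hand:

	out/minikube-linux-amd64 ssh -p NoKubernetes-324880 \
	  "sudo systemctl is-active --quiet service kubelet" \
	  || echo "kubelet inactive, as the test expects"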

TestNoKubernetes/serial/ProfileList (0.85s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.85s)

TestNoKubernetes/serial/Stop (1.32s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-324880
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-324880: (1.323211507s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

TestPause/serial/Start (84.85s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-017566 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-017566 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m24.845377891s)
--- PASS: TestPause/serial/Start (84.85s)

TestNetworkPlugins/group/auto/Start (94s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-644084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-644084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m33.999606097s)
--- PASS: TestNetworkPlugins/group/auto/Start (94.00s)

TestNetworkPlugins/group/kindnet/Start (68.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-644084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-644084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m8.171636957s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (68.17s)

TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-644084 "pgrep -a kubelet"
I0904 07:01:59.363680 1120074 config.go:182] Loaded profile config "auto-644084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

TestNetworkPlugins/group/auto/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-644084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xczl5" [431f49b7-533f-4fd8-be16-24b6f0fee0bc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xczl5" [431f49b7-533f-4fd8-be16-24b6f0fee0bc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.00653042s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.24s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-7hlpg" [6c0ba74b-f232-42a6-a676-7fd3c042316d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004010092s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/Start (77.24s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-644084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0904 07:02:09.599365 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-644084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m17.237439s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.24s)

TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-644084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-644084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-644084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
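
For each plugin group, the Localhost and HairPin probes differ only in the nc target: Localhost proves the pod can reach its own port via 127.0.0.1, while HairPin connects to the pod's own service name (netcat:8080), which only succeeds when the CNI routes a pod's traffic back to itself through the service VIP. The two assertions, as run above against the auto context and repeated for every plugin below:

	kubectl --context auto-644084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-644084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"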

TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-644084 "pgrep -a kubelet"
I0904 07:02:12.726496 1120074 config.go:182] Loaded profile config "kindnet-644084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-644084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zsbd4" [bc3d85aa-8ff9-4beb-987f-20f933c08fc0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zsbd4" [bc3d85aa-8ff9-4beb-987f-20f933c08fc0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.00489565s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-644084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-644084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-644084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/custom-flannel/Start (92.65s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-644084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0904 07:02:26.529024 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-644084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m32.65067653s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (92.65s)

TestNetworkPlugins/group/enable-default-cni/Start (88.65s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-644084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-644084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m28.653188183s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (88.65s)

TestNetworkPlugins/group/flannel/Start (122.93s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-644084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-644084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m2.927900376s)
--- PASS: TestNetworkPlugins/group/flannel/Start (122.93s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-txc6b" [2e44d4ae-8f50-4b9a-9cbf-159397b660eb] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-txc6b" [2e44d4ae-8f50-4b9a-9cbf-159397b660eb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006418541s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-644084 "pgrep -a kubelet"
I0904 07:03:30.676681 1120074 config.go:182] Loaded profile config "calico-644084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

TestNetworkPlugins/group/calico/NetCatPod (14.24s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-644084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fvbpm" [3d33dd81-1a18-4c22-ace1-d2129305a440] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fvbpm" [3d33dd81-1a18-4c22-ace1-d2129305a440] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.005439522s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.24s)

TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-644084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-644084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-644084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-644084 "pgrep -a kubelet"
I0904 07:03:59.037395 1120074 config.go:182] Loaded profile config "custom-flannel-644084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-644084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pf9k6" [914ded2b-27c0-41d6-a96c-32f1de9e0131] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pf9k6" [914ded2b-27c0-41d6-a96c-32f1de9e0131] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004825187s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.37s)

TestNetworkPlugins/group/bridge/Start (89.74s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-644084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-644084 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m29.735507172s)
--- PASS: TestNetworkPlugins/group/bridge/Start (89.74s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-644084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-644084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-644084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-644084 "pgrep -a kubelet"
I0904 07:04:10.093839 1120074 config.go:182] Loaded profile config "enable-default-cni-644084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-644084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-47g28" [86b76015-4a24-44d7-93ab-b3b0b0f4b226] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-47g28" [86b76015-4a24-44d7-93ab-b3b0b0f4b226] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004873579s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.38s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-644084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-644084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-644084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestStartStop/group/old-k8s-version/serial/FirstStart (103.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-628685 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-628685 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m43.762771551s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (103.76s)

TestStartStop/group/no-preload/serial/FirstStart (116.24s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-469304 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-469304 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m56.237514013s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (116.24s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-gwvln" [555b6e3e-66fa-4bc7-b9cc-cc7bf47bfd17] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004042561s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-644084 "pgrep -a kubelet"
I0904 07:05:06.404226 1120074 config.go:182] Loaded profile config "flannel-644084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-644084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-85qwm" [07e53578-aca2-4ba1-accf-1d526f0d8958] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-85qwm" [07e53578-aca2-4ba1-accf-1d526f0d8958] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004304432s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

TestNetworkPlugins/group/flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-644084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

TestNetworkPlugins/group/flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-644084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-644084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-644084 "pgrep -a kubelet"
I0904 07:05:32.901980 1120074 config.go:182] Loaded profile config "bridge-644084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-644084 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-htsm2" [b6816452-ff17-4c68-85c4-9608ae93057b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-htsm2" [b6816452-ff17-4c68-85c4-9608ae93057b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004946845s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

TestStartStop/group/embed-certs/serial/FirstStart (88.28s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-590305 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-590305 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m28.280173143s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (88.28s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-644084 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-644084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-644084 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)
E0904 07:09:40.340148 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/custom-flannel-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (95.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-797378 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-797378 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m35.195493707s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (95.20s)

TestStartStop/group/old-k8s-version/serial/DeployApp (12.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-628685 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1e845e03-ae54-415e-900f-8ad3a8b71ad7] Pending
helpers_test.go:352: "busybox" [1e845e03-ae54-415e-900f-8ad3a8b71ad7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1e845e03-ae54-415e-900f-8ad3a8b71ad7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 12.004244749s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-628685 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (12.39s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-628685 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-628685 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.092031559s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-628685 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/old-k8s-version/serial/Stop (91.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-628685 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-628685 --alsologtostderr -v=3: (1m31.071492475s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.07s)

TestStartStop/group/no-preload/serial/DeployApp (12.29s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-469304 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3c316817-df99-4206-96df-7f4bbd4e48b1] Pending
helpers_test.go:352: "busybox" [3c316817-df99-4206-96df-7f4bbd4e48b1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3c316817-df99-4206-96df-7f4bbd4e48b1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 12.005419908s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-469304 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.29s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-469304 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-469304 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/no-preload/serial/Stop (90.84s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-469304 --alsologtostderr -v=3
E0904 07:06:59.584803 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/auto-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:06:59.591219 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/auto-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:06:59.602587 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/auto-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:06:59.624085 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/auto-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:06:59.665513 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/auto-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:06:59.746960 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/auto-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:06:59.908517 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/auto-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:07:00.230241 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/auto-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:07:00.872563 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/auto-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:07:02.154671 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/auto-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-469304 --alsologtostderr -v=3: (1m30.843347107s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (90.84s)

TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-590305 create -f testdata/busybox.yaml
E0904 07:07:04.417665 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3ea38951-73d4-46a4-ab1d-7f87a73d9c3e] Pending
E0904 07:07:04.716147 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/auto-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [3ea38951-73d4-46a4-ab1d-7f87a73d9c3e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0904 07:07:06.500240 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:07:06.506586 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:07:06.517949 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:07:06.539521 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:07:06.580947 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:07:06.662441 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:07:06.824104 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:07:07.146090 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:07:07.787481 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:07:09.069273 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [3ea38951-73d4-46a4-ab1d-7f87a73d9c3e] Running
E0904 07:07:09.837915 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/auto-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:07:11.631200 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004781813s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-590305 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-590305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-590305 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/embed-certs/serial/Stop (91.21s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-590305 --alsologtostderr -v=3
E0904 07:07:16.753388 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:07:20.079994 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/auto-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:07:26.528708 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/functional-968890/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:07:26.995530 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-590305 --alsologtostderr -v=3: (1m31.2065952s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.21s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-797378 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6aa97ca8-9ca1-4524-91bc-3a1c3b22517e] Pending
helpers_test.go:352: "busybox" [6aa97ca8-9ca1-4524-91bc-3a1c3b22517e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6aa97ca8-9ca1-4524-91bc-3a1c3b22517e] Running
E0904 07:07:40.561755 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/auto-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00366742s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-797378 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.25s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-797378 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-797378 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (91.65s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-797378 --alsologtostderr -v=3
E0904 07:07:47.477839 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-797378 --alsologtostderr -v=3: (1m31.652817152s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.65s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-628685 -n old-k8s-version-628685
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-628685 -n old-k8s-version-628685: exit status 7 (66.78104ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-628685 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (47.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-628685 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-628685 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (47.00622353s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-628685 -n old-k8s-version-628685
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (47.33s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-469304 -n no-preload-469304
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-469304 -n no-preload-469304: exit status 7 (76.567706ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-469304 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (60.99s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-469304 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
E0904 07:08:21.523446 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/auto-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:08:24.433108 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/calico-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:08:24.439484 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/calico-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:08:24.450848 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/calico-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:08:24.472215 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/calico-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:08:24.513635 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/calico-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:08:24.594982 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/calico-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:08:24.756615 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/calico-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:08:25.078553 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/calico-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:08:25.720755 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/calico-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:08:27.002135 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/calico-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:08:27.492545 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/addons-691233/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:08:28.440022 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:08:29.564440 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/calico-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:08:34.686428 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/calico-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-469304 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m0.619194444s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-469304 -n no-preload-469304
E0904 07:09:20.716026 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/enable-default-cni-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (60.99s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (17.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-8m9zg" [af7e776b-c2c4-443e-8b21-093b20321965] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0904 07:08:44.928675 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/calico-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-8m9zg" [af7e776b-c2c4-443e-8b21-093b20321965] Running
E0904 07:08:59.362009 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/custom-flannel-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:08:59.368523 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/custom-flannel-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:08:59.379942 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/custom-flannel-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:08:59.401429 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/custom-flannel-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:08:59.443659 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/custom-flannel-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:08:59.525078 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/custom-flannel-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.004456358s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (17.01s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-590305 -n embed-certs-590305
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-590305 -n embed-certs-590305: exit status 7 (80.140051ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-590305 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (50.34s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-590305 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-590305 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (49.815372692s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-590305 -n embed-certs-590305
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.34s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-8m9zg" [af7e776b-c2c4-443e-8b21-093b20321965] Running
E0904 07:08:59.686796 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/custom-flannel-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:09:00.008514 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/custom-flannel-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:09:00.650898 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/custom-flannel-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:09:01.932933 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/custom-flannel-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:09:04.494786 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/custom-flannel-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003681239s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-628685 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-628685 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (3.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-628685 --alsologtostderr -v=1
E0904 07:09:05.410537 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/calico-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-628685 -n old-k8s-version-628685
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-628685 -n old-k8s-version-628685: exit status 2 (289.923786ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-628685 -n old-k8s-version-628685
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-628685 -n old-k8s-version-628685: exit status 2 (290.429878ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-628685 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-628685 -n old-k8s-version-628685
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-628685 -n old-k8s-version-628685
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.18s)

TestStartStop/group/newest-cni/serial/FirstStart (50.53s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-593394 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
E0904 07:09:10.461364 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/enable-default-cni-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:09:10.467764 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/enable-default-cni-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:09:10.479203 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/enable-default-cni-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:09:10.500663 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/enable-default-cni-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:09:10.542265 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/enable-default-cni-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:09:10.624063 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/enable-default-cni-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:09:10.786109 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/enable-default-cni-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:09:11.108104 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/enable-default-cni-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:09:11.750143 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/enable-default-cni-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:09:13.032258 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/enable-default-cni-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:09:15.594572 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/enable-default-cni-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-593394 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (50.526507916s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (50.53s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-797378 -n default-k8s-diff-port-797378
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-797378 -n default-k8s-diff-port-797378: exit status 7 (93.601834ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-797378 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (67.83s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-797378 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
E0904 07:09:19.857675 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/custom-flannel-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-797378 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m7.591398485s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-797378 -n default-k8s-diff-port-797378
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (67.83s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fp2tw" [ce1c63bc-92ce-4601-8e7f-fee694fb818f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fp2tw" [ce1c63bc-92ce-4601-8e7f-fee694fb818f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.004253778s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fp2tw" [ce1c63bc-92ce-4601-8e7f-fee694fb818f] Running
E0904 07:09:30.957543 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/enable-default-cni-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003780604s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-469304 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-469304 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (3.62s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-469304 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-469304 -n no-preload-469304
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-469304 -n no-preload-469304: exit status 2 (308.845429ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-469304 -n no-preload-469304
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-469304 -n no-preload-469304: exit status 2 (339.970779ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-469304 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-469304 --alsologtostderr -v=1: (1.107770626s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-469304 -n no-preload-469304
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-469304 -n no-preload-469304
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.62s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sz65h" [551b31b7-1c84-4cbb-bf44-3f59dcdf83ac] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003574917s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sz65h" [551b31b7-1c84-4cbb-bf44-3f59dcdf83ac] Running
E0904 07:09:43.445527 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/auto-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:09:46.372876 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/calico-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003450601s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-590305 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-590305 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3.17s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-590305 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-590305 --alsologtostderr -v=1: (1.128693836s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-590305 -n embed-certs-590305
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-590305 -n embed-certs-590305: exit status 2 (273.92268ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-590305 -n embed-certs-590305
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-590305 -n embed-certs-590305: exit status 2 (286.031209ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-590305 --alsologtostderr -v=1
E0904 07:09:50.361954 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/kindnet-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-590305 -n embed-certs-590305
E0904 07:09:51.438934 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/enable-default-cni-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-590305 -n embed-certs-590305
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.17s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.37s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-593394 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0904 07:10:00.190844 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/flannel-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:00.232368 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/flannel-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:00.316572 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/flannel-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:00.478543 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/flannel-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:00.800703 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/flannel-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:01.442766 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/flannel-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-593394 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.373070827s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.37s)
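Note: the --images and --registries flags used above let the suite redirect an addon to a substitute image; here metrics-server is pointed at registry.k8s.io/echoserver:1.4 on a deliberately unresolvable registry (fake.domain), since this subtest only verifies that enabling the addon succeeds, not that its pod comes up. The equivalent standalone invocation, taken from the logged command:

# Sketch: enable an addon with its image and registry overridden.
minikube -p newest-cni-593394 addons enable metrics-server \
  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
  --registries=MetricsServer=fake.domain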

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.35s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-593394 --alsologtostderr -v=3
E0904 07:10:02.724466 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/flannel-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:05.286505 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/flannel-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:10.408740 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/flannel-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-593394 --alsologtostderr -v=3: (11.349720008s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.35s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-593394 -n newest-cni-593394
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-593394 -n newest-cni-593394: exit status 7 (77.762181ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-593394 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (36.68s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-593394 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
E0904 07:10:20.650801 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/flannel-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:21.302232 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/custom-flannel-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-593394 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (36.396958846s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-593394 -n newest-cni-593394
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.68s)
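Note the restricted --wait=apiserver,system_pods,default_sa list on this restart: the profile is started with --network-plugin=cni but no CNI plugin is ever deployed, so ordinary pods cannot schedule (the "cni mode requires additional setup" warnings elsewhere in this group) and waiting on full pod readiness would hang. A quick manual check of that state, assuming the newest-cni-593394 context exists:

# Sketch: with CNI requested but not installed, nodes typically stay NotReady
# and unscheduled pods sit in Pending.
kubectl --context newest-cni-593394 get nodes -o wide
kubectl --context newest-cni-593394 get pods -A --field-selector=status.phase=Pending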

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-spk2f" [6377e4f2-91a7-4d1d-8dd7-bc8e0ac72c5c] Running
E0904 07:10:32.400652 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/enable-default-cni-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004133289s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-spk2f" [6377e4f2-91a7-4d1d-8dd7-bc8e0ac72c5c] Running
E0904 07:10:33.143409 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/bridge-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:33.150301 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/bridge-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:33.162002 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/bridge-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:33.183459 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/bridge-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:33.225073 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/bridge-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:33.307036 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/bridge-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:33.468925 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/bridge-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:33.790745 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/bridge-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:34.432435 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/bridge-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 07:10:35.715143 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/bridge-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004275179s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-797378 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-797378 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)
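Note: VerifyKubernetesImages compares the profile's loaded images against the expected set for the Kubernetes version under test; kindest/kindnetd and gcr.io/k8s-minikube/busybox are flagged only because they are test workloads outside that set. To inspect the same list by hand (the jq field name is an assumption about minikube's JSON output, not taken from this log):

# Sketch: list image tags loaded in a profile; assumes jq is installed.
minikube -p default-k8s-diff-port-797378 image list --format=json | jq -r '.[].repoTags[]'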

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.76s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-797378 --alsologtostderr -v=1
E0904 07:10:38.276502 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/bridge-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-797378 -n default-k8s-diff-port-797378
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-797378 -n default-k8s-diff-port-797378: exit status 2 (272.809352ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-797378 -n default-k8s-diff-port-797378
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-797378 -n default-k8s-diff-port-797378: exit status 2 (247.7027ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-797378 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-797378 -n default-k8s-diff-port-797378
E0904 07:10:41.132959 1120074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-1115845/.minikube/profiles/flannel-644084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-797378 -n default-k8s-diff-port-797378: (1.12368408s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-797378 -n default-k8s-diff-port-797378
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.76s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-593394 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (2.45s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-593394 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-593394 -n newest-cni-593394
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-593394 -n newest-cni-593394: exit status 2 (235.376677ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-593394 -n newest-cni-593394
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-593394 -n newest-cni-593394: exit status 2 (245.377582ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-593394 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-593394 -n newest-cni-593394
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-593394 -n newest-cni-593394
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.45s)

Test skip (40/323)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.0/cached-images 0
15 TestDownloadOnly/v1.34.0/binaries 0
16 TestDownloadOnly/v1.34.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.3
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
119 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
123 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
259 TestNetworkPlugins/group/kubenet 4.99
267 TestNetworkPlugins/group/cilium 3.59
282 TestStartStop/group/disable-driver-mounts 0.18
TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

TestDownloadOnly/v1.34.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

TestDownloadOnly/v1.34.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.3s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-691233 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (4.99s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
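Note: every command in the debugLogs dump below fails with "context was not found" or "Profile ... not found" because the kubenet variant is skipped up front — no kubenet-644084 cluster or kubeconfig context is ever created, and the diagnostic collector runs against nothing (the empty kubectl config captured partway down confirms this). The same absence can be checked directly:

# Sketch: confirm neither the minikube profile nor the kubectl context exists.
minikube profile list
kubectl config get-contexts kubenet-644084   # non-zero exit when the context is absent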
panic.go:636: 
----------------------- debugLogs start: kubenet-644084 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-644084

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-644084

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-644084

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-644084

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-644084

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-644084

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-644084

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-644084

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-644084

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-644084

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-644084

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-644084" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-644084" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-644084" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-644084" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-644084" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-644084" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-644084" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-644084" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-644084" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-644084" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-644084" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-644084

>>> host: docker daemon status:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

>>> host: docker daemon config:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

>>> host: docker system info:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

>>> host: cri-docker daemon status:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

>>> host: cri-docker daemon config:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

>>> host: cri-dockerd version:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

>>> host: containerd daemon status:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

>>> host: containerd daemon config:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

>>> host: containerd config dump:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

>>> host: crio daemon status:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

>>> host: crio daemon config:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

>>> host: /etc/crio:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

>>> host: crio config:
* Profile "kubenet-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-644084"

----------------------- debugLogs end: kubenet-644084 [took: 4.855240665s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-644084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-644084
--- SKIP: TestNetworkPlugins/group/kubenet (4.99s)
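
Every probe in the kubenet debug log above fails the same way because the kubenet-644084 profile was never created before the test was skipped, yet the log collector still runs its full battery of host and kubectl queries. A minimal sketch of how a collector could short-circuit when the context is absent, using hypothetical helper names (this is not minikube's actual helpers_test.go code):

package main

import (
	"fmt"
	"os/exec"
)

// contextExists reports whether kubectl knows the given context;
// `kubectl config get-contexts <name>` exits non-zero when the
// context is missing, the same failure mode seen in the log above.
func contextExists(name string) bool {
	return exec.Command("kubectl", "config", "get-contexts", name).Run() == nil
}

func main() {
	profile := "kubenet-644084"
	if !contextExists(profile) {
		fmt.Printf("context %q does not exist; skipping debug probes\n", profile)
		return
	}
	// ... run the ">>> host:" and ">>> k8s:" probes here ...
}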

x
+
TestNetworkPlugins/group/cilium (3.59s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-644084 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-644084

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-644084

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-644084

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-644084

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-644084

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-644084

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-644084

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-644084

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-644084

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-644084

>>> host: /etc/nsswitch.conf:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: /etc/hosts:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: /etc/resolv.conf:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-644084

>>> host: crictl pods:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: crictl containers:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> k8s: describe netcat deployment:
error: context "cilium-644084" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-644084" does not exist

>>> k8s: netcat logs:
error: context "cilium-644084" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-644084" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-644084" does not exist

>>> k8s: coredns logs:
error: context "cilium-644084" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-644084" does not exist

>>> k8s: api server logs:
error: context "cilium-644084" does not exist

>>> host: /etc/cni:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: ip a s:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: ip r s:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: iptables-save:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: iptables table nat:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-644084

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-644084

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-644084" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-644084" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-644084

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-644084

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-644084" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-644084" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-644084" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-644084" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-644084" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: kubelet daemon config:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> k8s: kubelet logs:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-644084

>>> host: docker daemon status:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: docker daemon config:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: docker system info:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: cri-docker daemon status:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: cri-docker daemon config:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: cri-dockerd version:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: containerd daemon status:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: containerd daemon config:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: containerd config dump:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: crio daemon status:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: crio daemon config:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: /etc/crio:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

>>> host: crio config:
* Profile "cilium-644084" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-644084"

----------------------- debugLogs end: cilium-644084 [took: 3.431633153s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-644084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-644084
--- SKIP: TestNetworkPlugins/group/cilium (3.59s)
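
Two error signatures alternate through the cilium debug log above: kubectl-backed probes fail with "context was not found" or `context "cilium-644084" does not exist` because the kubeconfig shown earlier is empty, while host probes fail with minikube's "Profile ... not found" hint because no VM was ever created. A hedged sketch reproducing one probe of each family (the probe helper is hypothetical, not part of the test suite):

package main

import (
	"fmt"
	"os/exec"
)

// probe runs cmd and prints its combined output, mirroring the
// ">>> ..." sections of the debug log; errors are expected when
// the profile does not exist.
func probe(label string, cmd *exec.Cmd) {
	fmt.Printf(">>> %s:\n", label)
	out, _ := cmd.CombinedOutput()
	fmt.Printf("%s\n", out)
}

func main() {
	profile := "cilium-644084"
	// kubectl probe: fails against the empty kubeconfig.
	probe("k8s: cms", exec.Command("kubectl", "--context", profile, "get", "cm", "-A"))
	// host probe: fails because minikube has no such profile.
	probe("host: crio config", exec.Command("minikube", "-p", profile, "ssh", "sudo crio config"))
}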

x
+
TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-776688" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-776688
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
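
The disable-driver-mounts entry is gated on the VM driver rather than on cluster state; a guard of roughly this shape would produce the skip message logged at start_stop_delete_test.go:101 (the helper below is an assumption, not the suite's actual code):

package mytest

import "testing"

// maybeSkipDriverMounts skips the calling test unless it is running
// on the virtualbox driver, matching the SKIP message above.
func maybeSkipDriverMounts(t *testing.T, driver string) {
	t.Helper()
	if driver != "virtualbox" {
		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
	}
}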
