Test Report: KVM_Linux_crio 21655

f8e963384863fe0b9099940b8c321271fa941d51:2025-09-29:41681

Tests failed (3/324)

Order  Failed test                                     Duration (s)
37     TestAddons/parallel/Ingress                     158.08
244    TestPreload                                     133.73
280    TestPause/serial/SecondStartNoReconfiguration   361.92
TestAddons/parallel/Ingress (158.08s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-965504 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-965504 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-965504 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [4c1df5c9-5d1d-4ca1-8e0c-f071fa132701] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [4c1df5c9-5d1d-4ca1-8e0c-f071fa132701] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.003438214s
I0929 11:20:38.794447  369423 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-965504 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-965504 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.635246742s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-965504 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-965504 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.82
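
The failing step above is the in-VM curl probe: "Process exited with status 28" is curl's timeout exit code, i.e. nothing answered on port 80 inside the VM within the limit. A minimal Go sketch of an equivalent retrying probe (the profile name addons-965504, binary path, and Host header come from the log above; the retry budget and the per-attempt -m 10 cap are assumptions, and this is not the test's actual helper code):

// probe.go: hedged sketch, not minikube's test code. Re-runs the same
// curl-through-ssh command as the failing step until it succeeds or a
// deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed budget
	for {
		// -m 10 caps each attempt; the test's curl had no explicit cap.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "addons-965504",
			"ssh", "curl -s -m 10 http://127.0.0.1/ -H 'Host: nginx.example.com'").CombinedOutput()
		if err == nil {
			fmt.Printf("ingress answered:\n%s", out)
			return
		}
		if time.Now().After(deadline) {
			fmt.Printf("gave up: %v\n%s", err, out) // exit 28 = curl timed out
			return
		}
		time.Sleep(5 * time.Second)
	}
}
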
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-965504 -n addons-965504
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-965504 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-965504 logs -n 25: (1.430479488s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-880021                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-880021 │ jenkins │ v1.37.0 │ 29 Sep 25 11:16 UTC │ 29 Sep 25 11:16 UTC │
	│ start   │ --download-only -p binary-mirror-390760 --alsologtostderr --binary-mirror http://127.0.0.1:46057 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-390760 │ jenkins │ v1.37.0 │ 29 Sep 25 11:16 UTC │                     │
	│ delete  │ -p binary-mirror-390760                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-390760 │ jenkins │ v1.37.0 │ 29 Sep 25 11:16 UTC │ 29 Sep 25 11:16 UTC │
	│ addons  │ disable dashboard -p addons-965504                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-965504        │ jenkins │ v1.37.0 │ 29 Sep 25 11:16 UTC │                     │
	│ addons  │ enable dashboard -p addons-965504                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-965504        │ jenkins │ v1.37.0 │ 29 Sep 25 11:16 UTC │                     │
	│ start   │ -p addons-965504 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-965504        │ jenkins │ v1.37.0 │ 29 Sep 25 11:16 UTC │ 29 Sep 25 11:19 UTC │
	│ addons  │ addons-965504 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-965504        │ jenkins │ v1.37.0 │ 29 Sep 25 11:19 UTC │ 29 Sep 25 11:19 UTC │
	│ addons  │ addons-965504 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-965504        │ jenkins │ v1.37.0 │ 29 Sep 25 11:19 UTC │ 29 Sep 25 11:20 UTC │
	│ addons  │ enable headlamp -p addons-965504 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-965504        │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ addons  │ addons-965504 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-965504        │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ addons  │ addons-965504 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-965504        │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ ssh     │ addons-965504 ssh cat /opt/local-path-provisioner/pvc-4b3cf6fe-7015-406b-bfc8-70f12bca1c19_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                                                  │ addons-965504        │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ addons  │ addons-965504 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-965504        │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ addons  │ addons-965504 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-965504        │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:21 UTC │
	│ ip      │ addons-965504 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-965504        │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ addons  │ addons-965504 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-965504        │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ addons  │ addons-965504 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-965504        │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ addons  │ addons-965504 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-965504        │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ ssh     │ addons-965504 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-965504        │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ addons  │ addons-965504 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-965504        │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-965504                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-965504        │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ addons  │ addons-965504 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-965504        │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ addons  │ addons-965504 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-965504        │ jenkins │ v1.37.0 │ 29 Sep 25 11:21 UTC │ 29 Sep 25 11:21 UTC │
	│ addons  │ addons-965504 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-965504        │ jenkins │ v1.37.0 │ 29 Sep 25 11:21 UTC │ 29 Sep 25 11:21 UTC │
	│ ip      │ addons-965504 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-965504        │ jenkins │ v1.37.0 │ 29 Sep 25 11:22 UTC │ 29 Sep 25 11:22 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:16:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:16:25.950760  370114 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:16:25.950908  370114 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:16:25.950921  370114 out.go:374] Setting ErrFile to fd 2...
	I0929 11:16:25.950928  370114 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:16:25.951159  370114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-365455/.minikube/bin
	I0929 11:16:25.951746  370114 out.go:368] Setting JSON to false
	I0929 11:16:25.952673  370114 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3528,"bootTime":1759141058,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:16:25.952770  370114 start.go:140] virtualization: kvm guest
	I0929 11:16:25.954344  370114 out.go:179] * [addons-965504] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:16:25.955441  370114 notify.go:220] Checking for updates...
	I0929 11:16:25.955453  370114 out.go:179]   - MINIKUBE_LOCATION=21655
	I0929 11:16:25.956465  370114 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:16:25.957395  370114 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21655-365455/kubeconfig
	I0929 11:16:25.958390  370114 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-365455/.minikube
	I0929 11:16:25.959428  370114 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:16:25.960475  370114 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:16:25.961794  370114 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:16:25.992324  370114 out.go:179] * Using the kvm2 driver based on user configuration
	I0929 11:16:25.993411  370114 start.go:304] selected driver: kvm2
	I0929 11:16:25.993432  370114 start.go:924] validating driver "kvm2" against <nil>
	I0929 11:16:25.993445  370114 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:16:25.994429  370114 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:16:25.994537  370114 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21655-365455/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:16:26.008704  370114 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:16:26.008745  370114 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21655-365455/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:16:26.022173  370114 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:16:26.022231  370114 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 11:16:26.022490  370114 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:16:26.022529  370114 cni.go:84] Creating CNI manager for ""
	I0929 11:16:26.022571  370114 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 11:16:26.022580  370114 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 11:16:26.022633  370114 start.go:348] cluster config:
	{Name:addons-965504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-965504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:16:26.022719  370114 iso.go:125] acquiring lock: {Name:mkf6a4bd1628698e7eb4c42d44aa8328e64686d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:16:26.024775  370114 out.go:179] * Starting "addons-965504" primary control-plane node in "addons-965504" cluster
	I0929 11:16:26.025647  370114 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:16:26.025682  370114 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21655-365455/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 11:16:26.025701  370114 cache.go:58] Caching tarball of preloaded images
	I0929 11:16:26.025781  370114 preload.go:172] Found /home/jenkins/minikube-integration/21655-365455/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 11:16:26.025791  370114 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 11:16:26.026147  370114 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/config.json ...
	I0929 11:16:26.026175  370114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/config.json: {Name:mkd1e1deb575c01813ea3652e0589e1d11ab9ca6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:16:26.026305  370114 start.go:360] acquireMachinesLock for addons-965504: {Name:mk02e688f69f8dfa335651bd732d9d18b60c0952 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0929 11:16:26.026352  370114 start.go:364] duration metric: took 33.663µs to acquireMachinesLock for "addons-965504"
	I0929 11:16:26.026375  370114 start.go:93] Provisioning new machine with config: &{Name:addons-965504 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-965504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 11:16:26.026448  370114 start.go:125] createHost starting for "" (driver="kvm2")
	I0929 11:16:26.027793  370114 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0929 11:16:26.027944  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:16:26.028002  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:16:26.040644  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44045
	I0929 11:16:26.041212  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:16:26.042000  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:16:26.042022  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:16:26.042377  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:16:26.042587  370114 main.go:141] libmachine: (addons-965504) Calling .GetMachineName
	I0929 11:16:26.042805  370114 main.go:141] libmachine: (addons-965504) Calling .DriverName
	I0929 11:16:26.043003  370114 start.go:159] libmachine.API.Create for "addons-965504" (driver="kvm2")
	I0929 11:16:26.043040  370114 client.go:168] LocalClient.Create starting
	I0929 11:16:26.043081  370114 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca.pem
	I0929 11:16:26.109661  370114 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/cert.pem
	I0929 11:16:26.326029  370114 main.go:141] libmachine: Running pre-create checks...
	I0929 11:16:26.326056  370114 main.go:141] libmachine: (addons-965504) Calling .PreCreateCheck
	I0929 11:16:26.326563  370114 main.go:141] libmachine: (addons-965504) Calling .GetConfigRaw
	I0929 11:16:26.327167  370114 main.go:141] libmachine: Creating machine...
	I0929 11:16:26.327186  370114 main.go:141] libmachine: (addons-965504) Calling .Create
	I0929 11:16:26.327388  370114 main.go:141] libmachine: (addons-965504) creating domain...
	I0929 11:16:26.327414  370114 main.go:141] libmachine: (addons-965504) creating network...
	I0929 11:16:26.328696  370114 main.go:141] libmachine: (addons-965504) DBG | found existing default network
	I0929 11:16:26.328945  370114 main.go:141] libmachine: (addons-965504) DBG | <network>
	I0929 11:16:26.328983  370114 main.go:141] libmachine: (addons-965504) DBG |   <name>default</name>
	I0929 11:16:26.328997  370114 main.go:141] libmachine: (addons-965504) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I0929 11:16:26.329007  370114 main.go:141] libmachine: (addons-965504) DBG |   <forward mode='nat'>
	I0929 11:16:26.329016  370114 main.go:141] libmachine: (addons-965504) DBG |     <nat>
	I0929 11:16:26.329024  370114 main.go:141] libmachine: (addons-965504) DBG |       <port start='1024' end='65535'/>
	I0929 11:16:26.329035  370114 main.go:141] libmachine: (addons-965504) DBG |     </nat>
	I0929 11:16:26.329046  370114 main.go:141] libmachine: (addons-965504) DBG |   </forward>
	I0929 11:16:26.329052  370114 main.go:141] libmachine: (addons-965504) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I0929 11:16:26.329059  370114 main.go:141] libmachine: (addons-965504) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I0929 11:16:26.329070  370114 main.go:141] libmachine: (addons-965504) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I0929 11:16:26.329076  370114 main.go:141] libmachine: (addons-965504) DBG |     <dhcp>
	I0929 11:16:26.329088  370114 main.go:141] libmachine: (addons-965504) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I0929 11:16:26.329109  370114 main.go:141] libmachine: (addons-965504) DBG |     </dhcp>
	I0929 11:16:26.329121  370114 main.go:141] libmachine: (addons-965504) DBG |   </ip>
	I0929 11:16:26.329128  370114 main.go:141] libmachine: (addons-965504) DBG | </network>
	I0929 11:16:26.329136  370114 main.go:141] libmachine: (addons-965504) DBG | 
	I0929 11:16:26.329717  370114 main.go:141] libmachine: (addons-965504) DBG | I0929 11:16:26.329557  370142 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000112dd0}
	I0929 11:16:26.329788  370114 main.go:141] libmachine: (addons-965504) DBG | defining private network:
	I0929 11:16:26.329822  370114 main.go:141] libmachine: (addons-965504) DBG | 
	I0929 11:16:26.329832  370114 main.go:141] libmachine: (addons-965504) DBG | <network>
	I0929 11:16:26.329847  370114 main.go:141] libmachine: (addons-965504) DBG |   <name>mk-addons-965504</name>
	I0929 11:16:26.329856  370114 main.go:141] libmachine: (addons-965504) DBG |   <dns enable='no'/>
	I0929 11:16:26.329868  370114 main.go:141] libmachine: (addons-965504) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0929 11:16:26.329876  370114 main.go:141] libmachine: (addons-965504) DBG |     <dhcp>
	I0929 11:16:26.329885  370114 main.go:141] libmachine: (addons-965504) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0929 11:16:26.329897  370114 main.go:141] libmachine: (addons-965504) DBG |     </dhcp>
	I0929 11:16:26.329904  370114 main.go:141] libmachine: (addons-965504) DBG |   </ip>
	I0929 11:16:26.329912  370114 main.go:141] libmachine: (addons-965504) DBG | </network>
	I0929 11:16:26.329936  370114 main.go:141] libmachine: (addons-965504) DBG | 
	I0929 11:16:26.335394  370114 main.go:141] libmachine: (addons-965504) DBG | creating private network mk-addons-965504 192.168.39.0/24...
	I0929 11:16:26.400531  370114 main.go:141] libmachine: (addons-965504) DBG | private network mk-addons-965504 192.168.39.0/24 created
	I0929 11:16:26.400839  370114 main.go:141] libmachine: (addons-965504) DBG | <network>
	I0929 11:16:26.400859  370114 main.go:141] libmachine: (addons-965504) DBG |   <name>mk-addons-965504</name>
	I0929 11:16:26.400872  370114 main.go:141] libmachine: (addons-965504) setting up store path in /home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504 ...
	I0929 11:16:26.400891  370114 main.go:141] libmachine: (addons-965504) building disk image from file:///home/jenkins/minikube-integration/21655-365455/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I0929 11:16:26.400918  370114 main.go:141] libmachine: (addons-965504) DBG |   <uuid>8c37d396-0265-41b1-a1b3-16555effb6f1</uuid>
	I0929 11:16:26.400937  370114 main.go:141] libmachine: (addons-965504) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I0929 11:16:26.400956  370114 main.go:141] libmachine: (addons-965504) DBG |   <mac address='52:54:00:22:ba:e7'/>
	I0929 11:16:26.400990  370114 main.go:141] libmachine: (addons-965504) DBG |   <dns enable='no'/>
	I0929 11:16:26.401001  370114 main.go:141] libmachine: (addons-965504) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0929 11:16:26.401008  370114 main.go:141] libmachine: (addons-965504) DBG |     <dhcp>
	I0929 11:16:26.401018  370114 main.go:141] libmachine: (addons-965504) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0929 11:16:26.401025  370114 main.go:141] libmachine: (addons-965504) DBG |     </dhcp>
	I0929 11:16:26.401037  370114 main.go:141] libmachine: (addons-965504) DBG |   </ip>
	I0929 11:16:26.401043  370114 main.go:141] libmachine: (addons-965504) DBG | </network>
	I0929 11:16:26.401058  370114 main.go:141] libmachine: (addons-965504) DBG | 
	I0929 11:16:26.401077  370114 main.go:141] libmachine: (addons-965504) DBG | I0929 11:16:26.400827  370142 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21655-365455/.minikube
	I0929 11:16:26.401109  370114 main.go:141] libmachine: (addons-965504) Downloading /home/jenkins/minikube-integration/21655-365455/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21655-365455/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I0929 11:16:26.681763  370114 main.go:141] libmachine: (addons-965504) DBG | I0929 11:16:26.681577  370142 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/id_rsa...
	I0929 11:16:26.857551  370114 main.go:141] libmachine: (addons-965504) DBG | I0929 11:16:26.857418  370142 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/addons-965504.rawdisk...
	I0929 11:16:26.857583  370114 main.go:141] libmachine: (addons-965504) DBG | Writing magic tar header
	I0929 11:16:26.857635  370114 main.go:141] libmachine: (addons-965504) DBG | Writing SSH key tar header
	I0929 11:16:26.857659  370114 main.go:141] libmachine: (addons-965504) DBG | I0929 11:16:26.857588  370142 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504 ...
	I0929 11:16:26.857711  370114 main.go:141] libmachine: (addons-965504) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504
	I0929 11:16:26.857745  370114 main.go:141] libmachine: (addons-965504) setting executable bit set on /home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504 (perms=drwx------)
	I0929 11:16:26.857762  370114 main.go:141] libmachine: (addons-965504) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21655-365455/.minikube/machines
	I0929 11:16:26.857776  370114 main.go:141] libmachine: (addons-965504) setting executable bit set on /home/jenkins/minikube-integration/21655-365455/.minikube/machines (perms=drwxr-xr-x)
	I0929 11:16:26.857788  370114 main.go:141] libmachine: (addons-965504) setting executable bit set on /home/jenkins/minikube-integration/21655-365455/.minikube (perms=drwxr-xr-x)
	I0929 11:16:26.857797  370114 main.go:141] libmachine: (addons-965504) setting executable bit set on /home/jenkins/minikube-integration/21655-365455 (perms=drwxrwxr-x)
	I0929 11:16:26.857805  370114 main.go:141] libmachine: (addons-965504) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0929 11:16:26.857813  370114 main.go:141] libmachine: (addons-965504) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0929 11:16:26.857823  370114 main.go:141] libmachine: (addons-965504) defining domain...
	I0929 11:16:26.857866  370114 main.go:141] libmachine: (addons-965504) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21655-365455/.minikube
	I0929 11:16:26.857899  370114 main.go:141] libmachine: (addons-965504) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21655-365455
	I0929 11:16:26.857925  370114 main.go:141] libmachine: (addons-965504) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0929 11:16:26.857943  370114 main.go:141] libmachine: (addons-965504) DBG | checking permissions on dir: /home/jenkins
	I0929 11:16:26.857949  370114 main.go:141] libmachine: (addons-965504) DBG | checking permissions on dir: /home
	I0929 11:16:26.857959  370114 main.go:141] libmachine: (addons-965504) DBG | skipping /home - not owner
	I0929 11:16:26.859202  370114 main.go:141] libmachine: (addons-965504) defining domain using XML: 
	I0929 11:16:26.859222  370114 main.go:141] libmachine: (addons-965504) <domain type='kvm'>
	I0929 11:16:26.859231  370114 main.go:141] libmachine: (addons-965504)   <name>addons-965504</name>
	I0929 11:16:26.859242  370114 main.go:141] libmachine: (addons-965504)   <memory unit='MiB'>4096</memory>
	I0929 11:16:26.859251  370114 main.go:141] libmachine: (addons-965504)   <vcpu>2</vcpu>
	I0929 11:16:26.859268  370114 main.go:141] libmachine: (addons-965504)   <features>
	I0929 11:16:26.859281  370114 main.go:141] libmachine: (addons-965504)     <acpi/>
	I0929 11:16:26.859288  370114 main.go:141] libmachine: (addons-965504)     <apic/>
	I0929 11:16:26.859295  370114 main.go:141] libmachine: (addons-965504)     <pae/>
	I0929 11:16:26.859302  370114 main.go:141] libmachine: (addons-965504)   </features>
	I0929 11:16:26.859332  370114 main.go:141] libmachine: (addons-965504)   <cpu mode='host-passthrough'>
	I0929 11:16:26.859348  370114 main.go:141] libmachine: (addons-965504)   </cpu>
	I0929 11:16:26.859362  370114 main.go:141] libmachine: (addons-965504)   <os>
	I0929 11:16:26.859366  370114 main.go:141] libmachine: (addons-965504)     <type>hvm</type>
	I0929 11:16:26.859371  370114 main.go:141] libmachine: (addons-965504)     <boot dev='cdrom'/>
	I0929 11:16:26.859377  370114 main.go:141] libmachine: (addons-965504)     <boot dev='hd'/>
	I0929 11:16:26.859383  370114 main.go:141] libmachine: (addons-965504)     <bootmenu enable='no'/>
	I0929 11:16:26.859388  370114 main.go:141] libmachine: (addons-965504)   </os>
	I0929 11:16:26.859392  370114 main.go:141] libmachine: (addons-965504)   <devices>
	I0929 11:16:26.859397  370114 main.go:141] libmachine: (addons-965504)     <disk type='file' device='cdrom'>
	I0929 11:16:26.859406  370114 main.go:141] libmachine: (addons-965504)       <source file='/home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/boot2docker.iso'/>
	I0929 11:16:26.859413  370114 main.go:141] libmachine: (addons-965504)       <target dev='hdc' bus='scsi'/>
	I0929 11:16:26.859427  370114 main.go:141] libmachine: (addons-965504)       <readonly/>
	I0929 11:16:26.859433  370114 main.go:141] libmachine: (addons-965504)     </disk>
	I0929 11:16:26.859439  370114 main.go:141] libmachine: (addons-965504)     <disk type='file' device='disk'>
	I0929 11:16:26.859445  370114 main.go:141] libmachine: (addons-965504)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0929 11:16:26.859453  370114 main.go:141] libmachine: (addons-965504)       <source file='/home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/addons-965504.rawdisk'/>
	I0929 11:16:26.859460  370114 main.go:141] libmachine: (addons-965504)       <target dev='hda' bus='virtio'/>
	I0929 11:16:26.859474  370114 main.go:141] libmachine: (addons-965504)     </disk>
	I0929 11:16:26.859485  370114 main.go:141] libmachine: (addons-965504)     <interface type='network'>
	I0929 11:16:26.859497  370114 main.go:141] libmachine: (addons-965504)       <source network='mk-addons-965504'/>
	I0929 11:16:26.859508  370114 main.go:141] libmachine: (addons-965504)       <model type='virtio'/>
	I0929 11:16:26.859520  370114 main.go:141] libmachine: (addons-965504)     </interface>
	I0929 11:16:26.859530  370114 main.go:141] libmachine: (addons-965504)     <interface type='network'>
	I0929 11:16:26.859539  370114 main.go:141] libmachine: (addons-965504)       <source network='default'/>
	I0929 11:16:26.859548  370114 main.go:141] libmachine: (addons-965504)       <model type='virtio'/>
	I0929 11:16:26.859559  370114 main.go:141] libmachine: (addons-965504)     </interface>
	I0929 11:16:26.859568  370114 main.go:141] libmachine: (addons-965504)     <serial type='pty'>
	I0929 11:16:26.859576  370114 main.go:141] libmachine: (addons-965504)       <target port='0'/>
	I0929 11:16:26.859582  370114 main.go:141] libmachine: (addons-965504)     </serial>
	I0929 11:16:26.859594  370114 main.go:141] libmachine: (addons-965504)     <console type='pty'>
	I0929 11:16:26.859610  370114 main.go:141] libmachine: (addons-965504)       <target type='serial' port='0'/>
	I0929 11:16:26.859620  370114 main.go:141] libmachine: (addons-965504)     </console>
	I0929 11:16:26.859627  370114 main.go:141] libmachine: (addons-965504)     <rng model='virtio'>
	I0929 11:16:26.859641  370114 main.go:141] libmachine: (addons-965504)       <backend model='random'>/dev/random</backend>
	I0929 11:16:26.859649  370114 main.go:141] libmachine: (addons-965504)     </rng>
	I0929 11:16:26.859673  370114 main.go:141] libmachine: (addons-965504)   </devices>
	I0929 11:16:26.859689  370114 main.go:141] libmachine: (addons-965504) </domain>
	I0929 11:16:26.859701  370114 main.go:141] libmachine: (addons-965504) 
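
	At this point the driver hands the XML above to libvirt to define and boot the guest. A minimal sketch of that sequence with the libvirt Go bindings (libvirt.org/go/libvirt); the domainXML constant stands in for the document printed above, and minikube's kvm2 driver wraps these calls in its own machine plugin rather than calling them this directly:

	// define.go: hedged sketch of defining and starting a libvirt domain.
	package main

	import (
		"log"

		"libvirt.org/go/libvirt"
	)

	const domainXML = `<domain type='kvm'>...</domain>` // placeholder for the XML above

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system") // URI from the log
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		dom, err := conn.DomainDefineXML(domainXML) // "defining domain..."
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // "starting domain..."
			log.Fatal(err)
		}
		log.Println("domain is now running")
	}
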
	I0929 11:16:26.866005  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:e4:3e:8b in network default
	I0929 11:16:26.866639  370114 main.go:141] libmachine: (addons-965504) starting domain...
	I0929 11:16:26.866659  370114 main.go:141] libmachine: (addons-965504) ensuring networks are active...
	I0929 11:16:26.866671  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:26.867370  370114 main.go:141] libmachine: (addons-965504) Ensuring network default is active
	I0929 11:16:26.867693  370114 main.go:141] libmachine: (addons-965504) Ensuring network mk-addons-965504 is active
	I0929 11:16:26.868276  370114 main.go:141] libmachine: (addons-965504) getting domain XML...
	I0929 11:16:26.869275  370114 main.go:141] libmachine: (addons-965504) DBG | starting domain XML:
	I0929 11:16:26.869297  370114 main.go:141] libmachine: (addons-965504) DBG | <domain type='kvm'>
	I0929 11:16:26.869307  370114 main.go:141] libmachine: (addons-965504) DBG |   <name>addons-965504</name>
	I0929 11:16:26.869315  370114 main.go:141] libmachine: (addons-965504) DBG |   <uuid>ddaa83d6-7903-4b5b-84ba-ab28b2108f14</uuid>
	I0929 11:16:26.869324  370114 main.go:141] libmachine: (addons-965504) DBG |   <memory unit='KiB'>4194304</memory>
	I0929 11:16:26.869341  370114 main.go:141] libmachine: (addons-965504) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I0929 11:16:26.869353  370114 main.go:141] libmachine: (addons-965504) DBG |   <vcpu placement='static'>2</vcpu>
	I0929 11:16:26.869361  370114 main.go:141] libmachine: (addons-965504) DBG |   <os>
	I0929 11:16:26.869375  370114 main.go:141] libmachine: (addons-965504) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0929 11:16:26.869386  370114 main.go:141] libmachine: (addons-965504) DBG |     <boot dev='cdrom'/>
	I0929 11:16:26.869397  370114 main.go:141] libmachine: (addons-965504) DBG |     <boot dev='hd'/>
	I0929 11:16:26.869407  370114 main.go:141] libmachine: (addons-965504) DBG |     <bootmenu enable='no'/>
	I0929 11:16:26.869418  370114 main.go:141] libmachine: (addons-965504) DBG |   </os>
	I0929 11:16:26.869428  370114 main.go:141] libmachine: (addons-965504) DBG |   <features>
	I0929 11:16:26.869437  370114 main.go:141] libmachine: (addons-965504) DBG |     <acpi/>
	I0929 11:16:26.869441  370114 main.go:141] libmachine: (addons-965504) DBG |     <apic/>
	I0929 11:16:26.869447  370114 main.go:141] libmachine: (addons-965504) DBG |     <pae/>
	I0929 11:16:26.869450  370114 main.go:141] libmachine: (addons-965504) DBG |   </features>
	I0929 11:16:26.869459  370114 main.go:141] libmachine: (addons-965504) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0929 11:16:26.869463  370114 main.go:141] libmachine: (addons-965504) DBG |   <clock offset='utc'/>
	I0929 11:16:26.869471  370114 main.go:141] libmachine: (addons-965504) DBG |   <on_poweroff>destroy</on_poweroff>
	I0929 11:16:26.869475  370114 main.go:141] libmachine: (addons-965504) DBG |   <on_reboot>restart</on_reboot>
	I0929 11:16:26.869492  370114 main.go:141] libmachine: (addons-965504) DBG |   <on_crash>destroy</on_crash>
	I0929 11:16:26.869500  370114 main.go:141] libmachine: (addons-965504) DBG |   <devices>
	I0929 11:16:26.869527  370114 main.go:141] libmachine: (addons-965504) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0929 11:16:26.869548  370114 main.go:141] libmachine: (addons-965504) DBG |     <disk type='file' device='cdrom'>
	I0929 11:16:26.869560  370114 main.go:141] libmachine: (addons-965504) DBG |       <driver name='qemu' type='raw'/>
	I0929 11:16:26.869576  370114 main.go:141] libmachine: (addons-965504) DBG |       <source file='/home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/boot2docker.iso'/>
	I0929 11:16:26.869587  370114 main.go:141] libmachine: (addons-965504) DBG |       <target dev='hdc' bus='scsi'/>
	I0929 11:16:26.869628  370114 main.go:141] libmachine: (addons-965504) DBG |       <readonly/>
	I0929 11:16:26.869643  370114 main.go:141] libmachine: (addons-965504) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0929 11:16:26.869656  370114 main.go:141] libmachine: (addons-965504) DBG |     </disk>
	I0929 11:16:26.869666  370114 main.go:141] libmachine: (addons-965504) DBG |     <disk type='file' device='disk'>
	I0929 11:16:26.869680  370114 main.go:141] libmachine: (addons-965504) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0929 11:16:26.869695  370114 main.go:141] libmachine: (addons-965504) DBG |       <source file='/home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/addons-965504.rawdisk'/>
	I0929 11:16:26.869714  370114 main.go:141] libmachine: (addons-965504) DBG |       <target dev='hda' bus='virtio'/>
	I0929 11:16:26.869731  370114 main.go:141] libmachine: (addons-965504) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0929 11:16:26.869742  370114 main.go:141] libmachine: (addons-965504) DBG |     </disk>
	I0929 11:16:26.869753  370114 main.go:141] libmachine: (addons-965504) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0929 11:16:26.869764  370114 main.go:141] libmachine: (addons-965504) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0929 11:16:26.869772  370114 main.go:141] libmachine: (addons-965504) DBG |     </controller>
	I0929 11:16:26.869786  370114 main.go:141] libmachine: (addons-965504) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0929 11:16:26.869797  370114 main.go:141] libmachine: (addons-965504) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0929 11:16:26.869823  370114 main.go:141] libmachine: (addons-965504) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0929 11:16:26.869848  370114 main.go:141] libmachine: (addons-965504) DBG |     </controller>
	I0929 11:16:26.869920  370114 main.go:141] libmachine: (addons-965504) DBG |     <interface type='network'>
	I0929 11:16:26.869945  370114 main.go:141] libmachine: (addons-965504) DBG |       <mac address='52:54:00:54:48:36'/>
	I0929 11:16:26.869961  370114 main.go:141] libmachine: (addons-965504) DBG |       <source network='mk-addons-965504'/>
	I0929 11:16:26.869985  370114 main.go:141] libmachine: (addons-965504) DBG |       <model type='virtio'/>
	I0929 11:16:26.870003  370114 main.go:141] libmachine: (addons-965504) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0929 11:16:26.870024  370114 main.go:141] libmachine: (addons-965504) DBG |     </interface>
	I0929 11:16:26.870036  370114 main.go:141] libmachine: (addons-965504) DBG |     <interface type='network'>
	I0929 11:16:26.870046  370114 main.go:141] libmachine: (addons-965504) DBG |       <mac address='52:54:00:e4:3e:8b'/>
	I0929 11:16:26.870120  370114 main.go:141] libmachine: (addons-965504) DBG |       <source network='default'/>
	I0929 11:16:26.870139  370114 main.go:141] libmachine: (addons-965504) DBG |       <model type='virtio'/>
	I0929 11:16:26.870152  370114 main.go:141] libmachine: (addons-965504) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0929 11:16:26.870166  370114 main.go:141] libmachine: (addons-965504) DBG |     </interface>
	I0929 11:16:26.870176  370114 main.go:141] libmachine: (addons-965504) DBG |     <serial type='pty'>
	I0929 11:16:26.870187  370114 main.go:141] libmachine: (addons-965504) DBG |       <target type='isa-serial' port='0'>
	I0929 11:16:26.870200  370114 main.go:141] libmachine: (addons-965504) DBG |         <model name='isa-serial'/>
	I0929 11:16:26.870210  370114 main.go:141] libmachine: (addons-965504) DBG |       </target>
	I0929 11:16:26.870220  370114 main.go:141] libmachine: (addons-965504) DBG |     </serial>
	I0929 11:16:26.870230  370114 main.go:141] libmachine: (addons-965504) DBG |     <console type='pty'>
	I0929 11:16:26.870243  370114 main.go:141] libmachine: (addons-965504) DBG |       <target type='serial' port='0'/>
	I0929 11:16:26.870260  370114 main.go:141] libmachine: (addons-965504) DBG |     </console>
	I0929 11:16:26.870273  370114 main.go:141] libmachine: (addons-965504) DBG |     <input type='mouse' bus='ps2'/>
	I0929 11:16:26.870285  370114 main.go:141] libmachine: (addons-965504) DBG |     <input type='keyboard' bus='ps2'/>
	I0929 11:16:26.870296  370114 main.go:141] libmachine: (addons-965504) DBG |     <audio id='1' type='none'/>
	I0929 11:16:26.870313  370114 main.go:141] libmachine: (addons-965504) DBG |     <memballoon model='virtio'>
	I0929 11:16:26.870325  370114 main.go:141] libmachine: (addons-965504) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0929 11:16:26.870350  370114 main.go:141] libmachine: (addons-965504) DBG |     </memballoon>
	I0929 11:16:26.870363  370114 main.go:141] libmachine: (addons-965504) DBG |     <rng model='virtio'>
	I0929 11:16:26.870375  370114 main.go:141] libmachine: (addons-965504) DBG |       <backend model='random'>/dev/random</backend>
	I0929 11:16:26.870390  370114 main.go:141] libmachine: (addons-965504) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0929 11:16:26.870399  370114 main.go:141] libmachine: (addons-965504) DBG |     </rng>
	I0929 11:16:26.870422  370114 main.go:141] libmachine: (addons-965504) DBG |   </devices>
	I0929 11:16:26.870454  370114 main.go:141] libmachine: (addons-965504) DBG | </domain>
	I0929 11:16:26.870468  370114 main.go:141] libmachine: (addons-965504) DBG | 
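
The XML dump above is the complete libvirt domain definition the kvm2 driver creates: two virtio NICs (one on the private mk-addons-965504 network, one on the default NAT network), a pty serial console, a memballoon, and an RNG backed by /dev/random. Assuming the libvirt client tools are available on the host, the same definition can be inspected by hand; the domain and network names below are the ones from this run:

	# Manual inspection of the domain defined above (sketch):
	virsh dumpxml addons-965504          # full domain XML, as logged
	virsh domiflist addons-965504        # both NICs and the networks they join
	virsh net-dumpxml mk-addons-965504   # the private network behind the first NIC
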
	I0929 11:16:28.158546  370114 main.go:141] libmachine: (addons-965504) waiting for domain to start...
	I0929 11:16:28.159910  370114 main.go:141] libmachine: (addons-965504) domain is now running
	I0929 11:16:28.159933  370114 main.go:141] libmachine: (addons-965504) waiting for IP...
	I0929 11:16:28.160772  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:28.161347  370114 main.go:141] libmachine: (addons-965504) DBG | no network interface addresses found for domain addons-965504 (source=lease)
	I0929 11:16:28.161372  370114 main.go:141] libmachine: (addons-965504) DBG | trying to list again with source=arp
	I0929 11:16:28.161598  370114 main.go:141] libmachine: (addons-965504) DBG | unable to find current IP address of domain addons-965504 in network mk-addons-965504 (interfaces detected: [])
	I0929 11:16:28.161679  370114 main.go:141] libmachine: (addons-965504) DBG | I0929 11:16:28.161619  370142 retry.go:31] will retry after 283.113238ms: waiting for domain to come up
	I0929 11:16:28.446152  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:28.446650  370114 main.go:141] libmachine: (addons-965504) DBG | no network interface addresses found for domain addons-965504 (source=lease)
	I0929 11:16:28.446683  370114 main.go:141] libmachine: (addons-965504) DBG | trying to list again with source=arp
	I0929 11:16:28.446892  370114 main.go:141] libmachine: (addons-965504) DBG | unable to find current IP address of domain addons-965504 in network mk-addons-965504 (interfaces detected: [])
	I0929 11:16:28.446922  370114 main.go:141] libmachine: (addons-965504) DBG | I0929 11:16:28.446853  370142 retry.go:31] will retry after 332.602546ms: waiting for domain to come up
	I0929 11:16:28.781236  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:28.781743  370114 main.go:141] libmachine: (addons-965504) DBG | no network interface addresses found for domain addons-965504 (source=lease)
	I0929 11:16:28.781775  370114 main.go:141] libmachine: (addons-965504) DBG | trying to list again with source=arp
	I0929 11:16:28.782004  370114 main.go:141] libmachine: (addons-965504) DBG | unable to find current IP address of domain addons-965504 in network mk-addons-965504 (interfaces detected: [])
	I0929 11:16:28.782027  370114 main.go:141] libmachine: (addons-965504) DBG | I0929 11:16:28.781952  370142 retry.go:31] will retry after 293.122377ms: waiting for domain to come up
	I0929 11:16:29.076520  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:29.076937  370114 main.go:141] libmachine: (addons-965504) DBG | no network interface addresses found for domain addons-965504 (source=lease)
	I0929 11:16:29.076984  370114 main.go:141] libmachine: (addons-965504) DBG | trying to list again with source=arp
	I0929 11:16:29.077235  370114 main.go:141] libmachine: (addons-965504) DBG | unable to find current IP address of domain addons-965504 in network mk-addons-965504 (interfaces detected: [])
	I0929 11:16:29.077260  370114 main.go:141] libmachine: (addons-965504) DBG | I0929 11:16:29.077216  370142 retry.go:31] will retry after 453.456239ms: waiting for domain to come up
	I0929 11:16:29.532055  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:29.532523  370114 main.go:141] libmachine: (addons-965504) DBG | no network interface addresses found for domain addons-965504 (source=lease)
	I0929 11:16:29.532555  370114 main.go:141] libmachine: (addons-965504) DBG | trying to list again with source=arp
	I0929 11:16:29.532770  370114 main.go:141] libmachine: (addons-965504) DBG | unable to find current IP address of domain addons-965504 in network mk-addons-965504 (interfaces detected: [])
	I0929 11:16:29.532797  370114 main.go:141] libmachine: (addons-965504) DBG | I0929 11:16:29.532743  370142 retry.go:31] will retry after 525.844771ms: waiting for domain to come up
	I0929 11:16:30.060660  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:30.061122  370114 main.go:141] libmachine: (addons-965504) DBG | no network interface addresses found for domain addons-965504 (source=lease)
	I0929 11:16:30.061148  370114 main.go:141] libmachine: (addons-965504) DBG | trying to list again with source=arp
	I0929 11:16:30.061433  370114 main.go:141] libmachine: (addons-965504) DBG | unable to find current IP address of domain addons-965504 in network mk-addons-965504 (interfaces detected: [])
	I0929 11:16:30.061488  370114 main.go:141] libmachine: (addons-965504) DBG | I0929 11:16:30.061428  370142 retry.go:31] will retry after 640.536333ms: waiting for domain to come up
	I0929 11:16:30.703455  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:30.704040  370114 main.go:141] libmachine: (addons-965504) DBG | no network interface addresses found for domain addons-965504 (source=lease)
	I0929 11:16:30.704091  370114 main.go:141] libmachine: (addons-965504) DBG | trying to list again with source=arp
	I0929 11:16:30.704367  370114 main.go:141] libmachine: (addons-965504) DBG | unable to find current IP address of domain addons-965504 in network mk-addons-965504 (interfaces detected: [])
	I0929 11:16:30.704434  370114 main.go:141] libmachine: (addons-965504) DBG | I0929 11:16:30.704359  370142 retry.go:31] will retry after 810.04562ms: waiting for domain to come up
	I0929 11:16:31.516266  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:31.516722  370114 main.go:141] libmachine: (addons-965504) DBG | no network interface addresses found for domain addons-965504 (source=lease)
	I0929 11:16:31.516744  370114 main.go:141] libmachine: (addons-965504) DBG | trying to list again with source=arp
	I0929 11:16:31.517035  370114 main.go:141] libmachine: (addons-965504) DBG | unable to find current IP address of domain addons-965504 in network mk-addons-965504 (interfaces detected: [])
	I0929 11:16:31.517074  370114 main.go:141] libmachine: (addons-965504) DBG | I0929 11:16:31.517024  370142 retry.go:31] will retry after 1.309661119s: waiting for domain to come up
	I0929 11:16:32.828721  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:32.829292  370114 main.go:141] libmachine: (addons-965504) DBG | no network interface addresses found for domain addons-965504 (source=lease)
	I0929 11:16:32.829311  370114 main.go:141] libmachine: (addons-965504) DBG | trying to list again with source=arp
	I0929 11:16:32.829580  370114 main.go:141] libmachine: (addons-965504) DBG | unable to find current IP address of domain addons-965504 in network mk-addons-965504 (interfaces detected: [])
	I0929 11:16:32.829676  370114 main.go:141] libmachine: (addons-965504) DBG | I0929 11:16:32.829589  370142 retry.go:31] will retry after 1.747425496s: waiting for domain to come up
	I0929 11:16:34.579876  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:34.580435  370114 main.go:141] libmachine: (addons-965504) DBG | no network interface addresses found for domain addons-965504 (source=lease)
	I0929 11:16:34.580461  370114 main.go:141] libmachine: (addons-965504) DBG | trying to list again with source=arp
	I0929 11:16:34.580893  370114 main.go:141] libmachine: (addons-965504) DBG | unable to find current IP address of domain addons-965504 in network mk-addons-965504 (interfaces detected: [])
	I0929 11:16:34.580923  370114 main.go:141] libmachine: (addons-965504) DBG | I0929 11:16:34.580844  370142 retry.go:31] will retry after 1.678639199s: waiting for domain to come up
	I0929 11:16:36.261830  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:36.262508  370114 main.go:141] libmachine: (addons-965504) DBG | no network interface addresses found for domain addons-965504 (source=lease)
	I0929 11:16:36.262546  370114 main.go:141] libmachine: (addons-965504) DBG | trying to list again with source=arp
	I0929 11:16:36.262820  370114 main.go:141] libmachine: (addons-965504) DBG | unable to find current IP address of domain addons-965504 in network mk-addons-965504 (interfaces detected: [])
	I0929 11:16:36.262869  370114 main.go:141] libmachine: (addons-965504) DBG | I0929 11:16:36.262795  370142 retry.go:31] will retry after 2.303489058s: waiting for domain to come up
	I0929 11:16:38.569414  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:38.569845  370114 main.go:141] libmachine: (addons-965504) DBG | no network interface addresses found for domain addons-965504 (source=lease)
	I0929 11:16:38.569872  370114 main.go:141] libmachine: (addons-965504) DBG | trying to list again with source=arp
	I0929 11:16:38.570183  370114 main.go:141] libmachine: (addons-965504) DBG | unable to find current IP address of domain addons-965504 in network mk-addons-965504 (interfaces detected: [])
	I0929 11:16:38.570212  370114 main.go:141] libmachine: (addons-965504) DBG | I0929 11:16:38.570148  370142 retry.go:31] will retry after 2.661004775s: waiting for domain to come up
	I0929 11:16:41.232663  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:41.233199  370114 main.go:141] libmachine: (addons-965504) DBG | no network interface addresses found for domain addons-965504 (source=lease)
	I0929 11:16:41.233229  370114 main.go:141] libmachine: (addons-965504) DBG | trying to list again with source=arp
	I0929 11:16:41.233431  370114 main.go:141] libmachine: (addons-965504) DBG | unable to find current IP address of domain addons-965504 in network mk-addons-965504 (interfaces detected: [])
	I0929 11:16:41.233501  370114 main.go:141] libmachine: (addons-965504) DBG | I0929 11:16:41.233430  370142 retry.go:31] will retry after 3.261046302s: waiting for domain to come up
	I0929 11:16:44.498010  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:44.498564  370114 main.go:141] libmachine: (addons-965504) found domain IP: 192.168.39.82
	I0929 11:16:44.498589  370114 main.go:141] libmachine: (addons-965504) reserving static IP address...
	I0929 11:16:44.498600  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has current primary IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:44.499107  370114 main.go:141] libmachine: (addons-965504) DBG | unable to find host DHCP lease matching {name: "addons-965504", mac: "52:54:00:54:48:36", ip: "192.168.39.82"} in network mk-addons-965504
	I0929 11:16:44.669239  370114 main.go:141] libmachine: (addons-965504) reserved static IP address 192.168.39.82 for domain addons-965504
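
The retry loop above is the driver polling for a DHCP lease on the domain's MAC (falling back to an ARP listing) until the guest picks up 192.168.39.82, after which it pins that address with a static host entry. A hand-rolled equivalent, assuming virsh access to the same network, might look like the sketch below; the fixed one-second sleep stands in for the driver's growing backoff:

	# Wait for a lease on the domain's MAC, then reserve the IP (sketch):
	while ! virsh net-dhcp-leases mk-addons-965504 | grep -q '52:54:00:54:48:36'; do
	  sleep 1
	done
	virsh net-update mk-addons-965504 add ip-dhcp-host \
	  "<host mac='52:54:00:54:48:36' name='addons-965504' ip='192.168.39.82'/>" \
	  --live --config
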
	I0929 11:16:44.669272  370114 main.go:141] libmachine: (addons-965504) waiting for SSH...
	I0929 11:16:44.669281  370114 main.go:141] libmachine: (addons-965504) DBG | Getting to WaitForSSH function...
	I0929 11:16:44.672302  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:44.672785  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:minikube Clientid:01:52:54:00:54:48:36}
	I0929 11:16:44.672821  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:44.673125  370114 main.go:141] libmachine: (addons-965504) DBG | Using SSH client type: external
	I0929 11:16:44.673151  370114 main.go:141] libmachine: (addons-965504) DBG | Using SSH private key: /home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/id_rsa (-rw-------)
	I0929 11:16:44.673192  370114 main.go:141] libmachine: (addons-965504) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.82 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0929 11:16:44.673214  370114 main.go:141] libmachine: (addons-965504) DBG | About to run SSH command:
	I0929 11:16:44.673238  370114 main.go:141] libmachine: (addons-965504) DBG | exit 0
	I0929 11:16:44.806642  370114 main.go:141] libmachine: (addons-965504) DBG | SSH cmd err, output: <nil>: 
	I0929 11:16:44.806942  370114 main.go:141] libmachine: (addons-965504) domain creation complete
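
The WaitForSSH step is just "exit 0" run through the external ssh client with the options logged at 11:16:44.673192. Reproducing the probe by hand, with the key path taken from this run:

	# Equivalent manual SSH liveness probe (options copied from the log above):
	ssh -F /dev/null -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
	  -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	  -i /home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/id_rsa \
	  -p 22 docker@192.168.39.82 'exit 0' && echo 'SSH is up'
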
	I0929 11:16:44.807359  370114 main.go:141] libmachine: (addons-965504) Calling .GetConfigRaw
	I0929 11:16:44.808019  370114 main.go:141] libmachine: (addons-965504) Calling .DriverName
	I0929 11:16:44.808224  370114 main.go:141] libmachine: (addons-965504) Calling .DriverName
	I0929 11:16:44.808386  370114 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0929 11:16:44.808404  370114 main.go:141] libmachine: (addons-965504) Calling .GetState
	I0929 11:16:44.809807  370114 main.go:141] libmachine: Detecting operating system of created instance...
	I0929 11:16:44.809821  370114 main.go:141] libmachine: Waiting for SSH to be available...
	I0929 11:16:44.809826  370114 main.go:141] libmachine: Getting to WaitForSSH function...
	I0929 11:16:44.809832  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:16:44.812257  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:44.812675  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:16:44.812695  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:44.812871  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:16:44.813119  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:16:44.813305  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:16:44.813453  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:16:44.813623  370114 main.go:141] libmachine: Using SSH client type: native
	I0929 11:16:44.813936  370114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I0929 11:16:44.813951  370114 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0929 11:16:44.922654  370114 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 11:16:44.922683  370114 main.go:141] libmachine: Detecting the provisioner...
	I0929 11:16:44.922694  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:16:44.925773  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:44.926237  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:16:44.926269  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:44.926397  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:16:44.926608  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:16:44.926783  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:16:44.926990  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:16:44.927187  370114 main.go:141] libmachine: Using SSH client type: native
	I0929 11:16:44.927477  370114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I0929 11:16:44.927494  370114 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0929 11:16:45.038258  370114 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0929 11:16:45.038363  370114 main.go:141] libmachine: found compatible host: buildroot
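
Provisioner detection boils down to reading /etc/os-release (output shown above) and matching the distribution ID. Since os-release is a shell-sourceable key=value file, a minimal sketch of the same check looks like this (minikube itself does this in Go, not shell):

	# Detect the provisioner from os-release (sketch):
	. /etc/os-release
	[ "$ID" = "buildroot" ] && echo "found compatible host: $ID"
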
	I0929 11:16:45.038390  370114 main.go:141] libmachine: Provisioning with buildroot...
	I0929 11:16:45.038404  370114 main.go:141] libmachine: (addons-965504) Calling .GetMachineName
	I0929 11:16:45.038739  370114 buildroot.go:166] provisioning hostname "addons-965504"
	I0929 11:16:45.038768  370114 main.go:141] libmachine: (addons-965504) Calling .GetMachineName
	I0929 11:16:45.039010  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:16:45.042196  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:45.042634  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:16:45.042665  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:45.042805  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:16:45.042997  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:16:45.043194  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:16:45.043366  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:16:45.043550  370114 main.go:141] libmachine: Using SSH client type: native
	I0929 11:16:45.043776  370114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I0929 11:16:45.043790  370114 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-965504 && echo "addons-965504" | sudo tee /etc/hostname
	I0929 11:16:45.182647  370114 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-965504
	
	I0929 11:16:45.182682  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:16:45.185405  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:45.185789  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:16:45.185823  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:45.186032  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:16:45.186231  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:16:45.186412  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:16:45.186564  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:16:45.186784  370114 main.go:141] libmachine: Using SSH client type: native
	I0929 11:16:45.187115  370114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I0929 11:16:45.187138  370114 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-965504' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-965504/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-965504' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 11:16:45.306660  370114 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 11:16:45.306692  370114 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21655-365455/.minikube CaCertPath:/home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21655-365455/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21655-365455/.minikube}
	I0929 11:16:45.306731  370114 buildroot.go:174] setting up certificates
	I0929 11:16:45.306744  370114 provision.go:84] configureAuth start
	I0929 11:16:45.306754  370114 main.go:141] libmachine: (addons-965504) Calling .GetMachineName
	I0929 11:16:45.307127  370114 main.go:141] libmachine: (addons-965504) Calling .GetIP
	I0929 11:16:45.310220  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:45.310657  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:16:45.310693  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:45.310878  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:16:45.314774  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:45.315227  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:16:45.315274  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:45.315484  370114 provision.go:143] copyHostCerts
	I0929 11:16:45.315590  370114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21655-365455/.minikube/ca.pem (1078 bytes)
	I0929 11:16:45.315707  370114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21655-365455/.minikube/cert.pem (1123 bytes)
	I0929 11:16:45.315803  370114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21655-365455/.minikube/key.pem (1675 bytes)
	I0929 11:16:45.315857  370114 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21655-365455/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca-key.pem org=jenkins.addons-965504 san=[127.0.0.1 192.168.39.82 addons-965504 localhost minikube]
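
configureAuth then signs a per-machine server certificate against the minikube CA, with the org and SAN list shown in the line above. A rough bash/openssl equivalent follows (file names mirror the log's paths; this is a sketch, not what minikube actually executes):

	# Generate a server cert with the logged SANs (sketch, bash for <(...)):
	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj '/O=jenkins.addons-965504' -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.82,DNS:addons-965504,DNS:localhost,DNS:minikube')
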
	I0929 11:16:45.563287  370114 provision.go:177] copyRemoteCerts
	I0929 11:16:45.563356  370114 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 11:16:45.563383  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:16:45.566199  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:45.566554  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:16:45.566633  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:45.566716  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:16:45.566895  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:16:45.567117  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:16:45.567274  370114 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/id_rsa Username:docker}
	I0929 11:16:45.654647  370114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 11:16:45.684354  370114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 11:16:45.713824  370114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 11:16:45.743414  370114 provision.go:87] duration metric: took 436.647749ms to configureAuth
	I0929 11:16:45.743455  370114 buildroot.go:189] setting minikube options for container-runtime
	I0929 11:16:45.743687  370114 config.go:182] Loaded profile config "addons-965504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:16:45.743815  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:16:45.746802  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:45.747215  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:16:45.747244  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:45.747443  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:16:45.747670  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:16:45.747862  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:16:45.748040  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:16:45.748233  370114 main.go:141] libmachine: Using SSH client type: native
	I0929 11:16:45.748438  370114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I0929 11:16:45.748451  370114 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 11:16:46.014512  370114 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
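
The command above drops CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts crio; the echoed file contents confirm the write. Checking the result inside the guest is straightforward (a sketch, run over ssh):

	# Verify the drop-in landed and crio came back up (sketch):
	cat /etc/sysconfig/crio.minikube     # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl show crio -p ActiveState   # expect ActiveState=active
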
	
	I0929 11:16:46.014544  370114 main.go:141] libmachine: Checking connection to Docker...
	I0929 11:16:46.014553  370114 main.go:141] libmachine: (addons-965504) Calling .GetURL
	I0929 11:16:46.016007  370114 main.go:141] libmachine: (addons-965504) DBG | using libvirt version 8000000
	I0929 11:16:46.018826  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:46.019241  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:16:46.019277  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:46.019461  370114 main.go:141] libmachine: Docker is up and running!
	I0929 11:16:46.019475  370114 main.go:141] libmachine: Reticulating splines...
	I0929 11:16:46.019484  370114 client.go:171] duration metric: took 19.976435199s to LocalClient.Create
	I0929 11:16:46.019514  370114 start.go:167] duration metric: took 19.976512605s to libmachine.API.Create "addons-965504"
	I0929 11:16:46.019527  370114 start.go:293] postStartSetup for "addons-965504" (driver="kvm2")
	I0929 11:16:46.019536  370114 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 11:16:46.019553  370114 main.go:141] libmachine: (addons-965504) Calling .DriverName
	I0929 11:16:46.019817  370114 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 11:16:46.019841  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:16:46.022060  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:46.022470  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:16:46.022498  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:46.022720  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:16:46.022918  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:16:46.023080  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:16:46.023210  370114 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/id_rsa Username:docker}
	I0929 11:16:46.110795  370114 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 11:16:46.115467  370114 info.go:137] Remote host: Buildroot 2025.02
	I0929 11:16:46.115498  370114 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-365455/.minikube/addons for local assets ...
	I0929 11:16:46.115585  370114 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-365455/.minikube/files for local assets ...
	I0929 11:16:46.115620  370114 start.go:296] duration metric: took 96.081528ms for postStartSetup
	I0929 11:16:46.115671  370114 main.go:141] libmachine: (addons-965504) Calling .GetConfigRaw
	I0929 11:16:46.116341  370114 main.go:141] libmachine: (addons-965504) Calling .GetIP
	I0929 11:16:46.119400  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:46.119835  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:16:46.119875  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:46.120128  370114 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/config.json ...
	I0929 11:16:46.120346  370114 start.go:128] duration metric: took 20.093886055s to createHost
	I0929 11:16:46.120377  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:16:46.122738  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:46.123228  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:16:46.123259  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:46.123471  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:16:46.123677  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:16:46.123830  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:16:46.123958  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:16:46.124122  370114 main.go:141] libmachine: Using SSH client type: native
	I0929 11:16:46.124330  370114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I0929 11:16:46.124340  370114 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0929 11:16:46.236487  370114 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759144606.211519820
	
	I0929 11:16:46.236514  370114 fix.go:216] guest clock: 1759144606.211519820
	I0929 11:16:46.236523  370114 fix.go:229] Guest: 2025-09-29 11:16:46.21151982 +0000 UTC Remote: 2025-09-29 11:16:46.12036143 +0000 UTC m=+20.205162545 (delta=91.15839ms)
	I0929 11:16:46.236579  370114 fix.go:200] guest clock delta is within tolerance: 91.15839ms
	I0929 11:16:46.236588  370114 start.go:83] releasing machines lock for "addons-965504", held for 20.210223804s
	I0929 11:16:46.236613  370114 main.go:141] libmachine: (addons-965504) Calling .DriverName
	I0929 11:16:46.236930  370114 main.go:141] libmachine: (addons-965504) Calling .GetIP
	I0929 11:16:46.239878  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:46.240418  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:16:46.240451  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:46.240634  370114 main.go:141] libmachine: (addons-965504) Calling .DriverName
	I0929 11:16:46.241203  370114 main.go:141] libmachine: (addons-965504) Calling .DriverName
	I0929 11:16:46.241398  370114 main.go:141] libmachine: (addons-965504) Calling .DriverName
	I0929 11:16:46.241512  370114 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 11:16:46.241573  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:16:46.241637  370114 ssh_runner.go:195] Run: cat /version.json
	I0929 11:16:46.241674  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:16:46.244891  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:46.245024  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:46.245299  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:16:46.245330  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:46.245357  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:16:46.245374  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:46.245485  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:16:46.245686  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:16:46.245695  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:16:46.245892  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:16:46.245897  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:16:46.246060  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:16:46.246071  370114 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/id_rsa Username:docker}
	I0929 11:16:46.246205  370114 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/id_rsa Username:docker}
	I0929 11:16:46.361946  370114 ssh_runner.go:195] Run: systemctl --version
	I0929 11:16:46.368372  370114 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 11:16:46.524924  370114 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0929 11:16:46.531879  370114 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0929 11:16:46.531957  370114 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 11:16:46.551089  370114 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0929 11:16:46.551123  370114 start.go:495] detecting cgroup driver to use...
	I0929 11:16:46.551197  370114 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 11:16:46.569889  370114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 11:16:46.586310  370114 docker.go:218] disabling cri-docker service (if available) ...
	I0929 11:16:46.586385  370114 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 11:16:46.603434  370114 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 11:16:46.619570  370114 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 11:16:46.763744  370114 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 11:16:46.973037  370114 docker.go:234] disabling docker service ...
	I0929 11:16:46.973133  370114 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 11:16:46.990498  370114 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 11:16:47.005564  370114 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 11:16:47.161749  370114 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 11:16:47.304186  370114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 11:16:47.320341  370114 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 11:16:47.342380  370114 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 11:16:47.342453  370114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:16:47.354468  370114 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0929 11:16:47.354598  370114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:16:47.366673  370114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:16:47.378953  370114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:16:47.391496  370114 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 11:16:47.404400  370114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:16:47.416121  370114 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:16:47.435858  370114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:16:47.447961  370114 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 11:16:47.458092  370114 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0929 11:16:47.458166  370114 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0929 11:16:47.476867  370114 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 11:16:47.488031  370114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:16:47.627292  370114 ssh_runner.go:195] Run: sudo systemctl restart crio
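
The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs manager, conmon cgroup, unprivileged port sysctl), and the one "failed" sysctl is expected: /proc/sys/net/bridge/ only exists once br_netfilter is loaded, which is why the modprobe follows it. Condensed, the netfilter fix plus a config check looks like this (sketch):

	# br_netfilter must be loaded before the bridge sysctls exist:
	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables   # resolves after the modprobe
	sudo grep -E 'pause_image|cgroup_manager' /etc/crio/crio.conf.d/02-crio.conf
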
	I0929 11:16:47.739958  370114 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 11:16:47.740065  370114 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 11:16:47.745473  370114 start.go:563] Will wait 60s for crictl version
	I0929 11:16:47.745545  370114 ssh_runner.go:195] Run: which crictl
	I0929 11:16:47.749464  370114 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 11:16:47.790433  370114 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0929 11:16:47.790548  370114 ssh_runner.go:195] Run: crio --version
	I0929 11:16:47.818312  370114 ssh_runner.go:195] Run: crio --version
	I0929 11:16:47.848339  370114 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0929 11:16:47.849577  370114 main.go:141] libmachine: (addons-965504) Calling .GetIP
	I0929 11:16:47.852751  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:47.853171  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:16:47.853201  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:16:47.853432  370114 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0929 11:16:47.857874  370114 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 11:16:47.872373  370114 kubeadm.go:875] updating cluster {Name:addons-965504 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-965504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 11:16:47.872475  370114 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:16:47.872517  370114 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 11:16:47.905159  370114 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
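
The preload check asks crictl for the image list and looks for a marker image (kube-apiserver at the target version); on this fresh VM it is absent, so the preload tarball gets copied over next. Reproducing the check by hand (the jq usage is an assumption for the sketch, not part of minikube):

	# Check whether the preload marker image is present (sketch):
	sudo crictl images --output json \
	  | jq -r '.images[].repoTags[]?' \
	  | grep -q 'registry.k8s.io/kube-apiserver:v1.34.0' \
	  || echo 'images are not preloaded'
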
	I0929 11:16:47.905230  370114 ssh_runner.go:195] Run: which lz4
	I0929 11:16:47.909296  370114 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0929 11:16:47.914050  370114 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0929 11:16:47.914079  370114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0929 11:16:49.331146  370114 crio.go:462] duration metric: took 1.421891186s to copy over tarball
	I0929 11:16:49.331248  370114 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0929 11:16:51.007505  370114 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.676208666s)
	I0929 11:16:51.007536  370114 crio.go:469] duration metric: took 1.676358999s to extract the tarball
	I0929 11:16:51.007545  370114 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0929 11:16:51.048023  370114 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 11:16:51.094104  370114 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 11:16:51.094133  370114 cache_images.go:85] Images are preloaded, skipping loading
	I0929 11:16:51.094143  370114 kubeadm.go:926] updating node { 192.168.39.82 8443 v1.34.0 crio true true} ...
	I0929 11:16:51.094248  370114 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-965504 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.82
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-965504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 11:16:51.094320  370114 ssh_runner.go:195] Run: crio config
	I0929 11:16:51.138566  370114 cni.go:84] Creating CNI manager for ""
	I0929 11:16:51.138595  370114 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 11:16:51.138609  370114 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 11:16:51.138629  370114 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.82 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-965504 NodeName:addons-965504 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.82"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.82 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 11:16:51.138749  370114 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.82
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-965504"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.82"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.82"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
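
The YAML above is the full generated kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration documents); a few lines below it is written to /var/tmp/minikube/kubeadm.yaml.new. Recent kubeadm releases can sanity-check such a file directly; this is a sketch, not a step minikube runs:

	# Validate the generated config against the kubeadm API (sketch):
	sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
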
	
	I0929 11:16:51.138811  370114 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 11:16:51.150640  370114 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 11:16:51.150744  370114 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 11:16:51.162298  370114 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0929 11:16:51.182315  370114 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 11:16:51.203474  370114 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I0929 11:16:51.224065  370114 ssh_runner.go:195] Run: grep 192.168.39.82	control-plane.minikube.internal$ /etc/hosts
	I0929 11:16:51.228427  370114 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.82	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
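
The /etc/hosts rewrite above is idempotent: it filters out any existing control-plane.minikube.internal line, appends the fresh 192.168.39.82 mapping, writes the result to a temp file, and sudo-copies it into place. A minimal Go sketch of the same filter-and-append pattern (it writes /etc/hosts directly, so it assumes root; the logged command goes through /tmp plus sudo cp instead):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const alias = "\tcontrol-plane.minikube.internal"
        const entry = "192.168.39.82" + alias

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }

        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, alias) {
                continue // drop any stale mapping for the alias
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry, "")

        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")), 0644); err != nil {
            log.Fatal(err)
        }
    }
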
	I0929 11:16:51.242820  370114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:16:51.380498  370114 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:16:51.415522  370114 certs.go:68] Setting up /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504 for IP: 192.168.39.82
	I0929 11:16:51.415560  370114 certs.go:194] generating shared ca certs ...
	I0929 11:16:51.415584  370114 certs.go:226] acquiring lock for ca certs: {Name:mk0b410c7c5424a4463d6cf6464227ce4eef65e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:16:51.415784  370114 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21655-365455/.minikube/ca.key
	I0929 11:16:51.565074  370114 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21655-365455/.minikube/ca.crt ...
	I0929 11:16:51.565107  370114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/.minikube/ca.crt: {Name:mk2260936e389b2ed8f6a7c94b60909035c91cf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:16:51.565288  370114 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21655-365455/.minikube/ca.key ...
	I0929 11:16:51.565299  370114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/.minikube/ca.key: {Name:mkc797b454d9c0e9904dd805923cca4d53d9ad11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:16:51.565380  370114 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21655-365455/.minikube/proxy-client-ca.key
	I0929 11:16:51.700706  370114 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21655-365455/.minikube/proxy-client-ca.crt ...
	I0929 11:16:51.700736  370114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/.minikube/proxy-client-ca.crt: {Name:mk7296bea08bcd0736c580299b77194d19f67aa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:16:51.700904  370114 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21655-365455/.minikube/proxy-client-ca.key ...
	I0929 11:16:51.700919  370114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/.minikube/proxy-client-ca.key: {Name:mk5ae227790d66bc398d40ebed4ab6645f772940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:16:51.701003  370114 certs.go:256] generating profile certs ...
	I0929 11:16:51.701061  370114 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.key
	I0929 11:16:51.701108  370114 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt with IP's: []
	I0929 11:16:51.753834  370114 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt ...
	I0929 11:16:51.753859  370114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: {Name:mkbe68c52f2f615c5bd82ef7aa917d2721e79fb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:16:51.754028  370114 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.key ...
	I0929 11:16:51.754039  370114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.key: {Name:mk1935049801ee18a3d0b5f53236536397f10a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:16:51.754112  370114 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/apiserver.key.91c6f72c
	I0929 11:16:51.754130  370114 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/apiserver.crt.91c6f72c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.82]
	I0929 11:16:51.937523  370114 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/apiserver.crt.91c6f72c ...
	I0929 11:16:51.937557  370114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/apiserver.crt.91c6f72c: {Name:mkf5b0ba84ae95e037793b03d97ab3fb73fde84e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:16:51.937720  370114 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/apiserver.key.91c6f72c ...
	I0929 11:16:51.937733  370114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/apiserver.key.91c6f72c: {Name:mk8128c2b1929990edc0cfb4423bba094beecdbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:16:51.937805  370114 certs.go:381] copying /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/apiserver.crt.91c6f72c -> /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/apiserver.crt
	I0929 11:16:51.937889  370114 certs.go:385] copying /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/apiserver.key.91c6f72c -> /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/apiserver.key
	I0929 11:16:51.937938  370114 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/proxy-client.key
	I0929 11:16:51.937959  370114 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/proxy-client.crt with IP's: []
	I0929 11:16:52.970464  370114 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/proxy-client.crt ...
	I0929 11:16:52.970499  370114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/proxy-client.crt: {Name:mk172e60e72a6354c515fcc60640522172e2774f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:16:52.970683  370114 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/proxy-client.key ...
	I0929 11:16:52.970699  370114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/proxy-client.key: {Name:mk50083d60b7d402704a4b256a11ebba92dec338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:16:52.970893  370114 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 11:16:52.970928  370114 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca.pem (1078 bytes)
	I0929 11:16:52.970951  370114 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/cert.pem (1123 bytes)
	I0929 11:16:52.970984  370114 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/key.pem (1675 bytes)
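
Each "generating ... ca cert" step above amounts to creating a self-signed X.509 CA and writing the crt/key pair under a file lock. A minimal sketch of the core operation with Go's standard crypto/x509; the 2048-bit RSA key and one-year validity are placeholder choices for the sketch, not necessarily minikube's actual parameters:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }

        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(1, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }

        // Self-signed: the template serves as both subject and issuer.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }

        crt, err := os.Create("ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        defer crt.Close()
        pem.Encode(crt, &pem.Block{Type: "CERTIFICATE", Bytes: der})

        keyOut, err := os.Create("ca.key")
        if err != nil {
            log.Fatal(err)
        }
        defer keyOut.Close()
        pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    }
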
	I0929 11:16:52.971575  370114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 11:16:53.014413  370114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 11:16:53.052197  370114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 11:16:53.082532  370114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 11:16:53.112353  370114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 11:16:53.141888  370114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 11:16:53.171734  370114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 11:16:53.201786  370114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 11:16:53.231514  370114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 11:16:53.260908  370114 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 11:16:53.280800  370114 ssh_runner.go:195] Run: openssl version
	I0929 11:16:53.287260  370114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 11:16:53.300937  370114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:16:53.306255  370114 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 11:16 /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:16:53.306315  370114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:16:53.313833  370114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
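
The openssl x509 -hash call above prints the certificate's subject hash (b5213941 for minikubeCA.pem), and the follow-up ln -fs publishes the CA as /etc/ssl/certs/<hash>.0, the name under which OpenSSL-linked clients look up trust anchors. A sketch of the same two steps, shelling out to openssl for the hash rather than reimplementing it:

    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"

        // openssl prints the 8-hex-digit subject hash on stdout.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out))

        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // replace any stale link, mirroring ln -fs
        if err := os.Symlink(pemPath, link); err != nil {
            log.Fatal(err)
        }
    }
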
	I0929 11:16:53.326860  370114 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 11:16:53.331446  370114 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 11:16:53.331505  370114 kubeadm.go:392] StartCluster: {Name:addons-965504 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-965504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:16:53.331599  370114 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 11:16:53.331683  370114 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 11:16:53.369360  370114 cri.go:89] found id: ""
	I0929 11:16:53.369469  370114 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 11:16:53.381335  370114 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 11:16:53.393470  370114 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 11:16:53.404953  370114 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 11:16:53.404990  370114 kubeadm.go:157] found existing configuration files:
	
	I0929 11:16:53.405047  370114 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 11:16:53.415725  370114 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 11:16:53.415797  370114 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 11:16:53.427314  370114 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 11:16:53.437697  370114 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 11:16:53.437755  370114 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 11:16:53.449406  370114 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 11:16:53.460871  370114 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 11:16:53.460924  370114 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 11:16:53.474421  370114 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 11:16:53.487260  370114 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 11:16:53.487335  370114 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
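
The four grep/rm pairs above apply one rule per kubeconfig under /etc/kubernetes: keep the file only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so kubeadm regenerates it. On this first start none of the files exist, so every grep exits with status 2 and each rm is a no-op. The same check-and-remove loop as a sketch:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}

        for _, name := range files {
            path := "/etc/kubernetes/" + name
            data, err := os.ReadFile(path)
            if err == nil && bytes.Contains(data, []byte(endpoint)) {
                continue // already targets the right endpoint; keep it
            }
            // Missing or pointing elsewhere: remove so kubeadm regenerates it.
            os.Remove(path)
            fmt.Println("removed stale", path)
        }
    }
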
	I0929 11:16:53.500965  370114 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0929 11:16:53.549747  370114 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 11:16:53.549846  370114 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 11:16:53.672553  370114 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 11:16:53.672710  370114 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 11:16:53.672837  370114 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 11:16:53.689136  370114 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 11:16:53.814603  370114 out.go:252]   - Generating certificates and keys ...
	I0929 11:16:53.814755  370114 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 11:16:53.814843  370114 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 11:16:54.110872  370114 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 11:16:54.578091  370114 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 11:16:54.765996  370114 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 11:16:55.021969  370114 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 11:16:55.307282  370114 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 11:16:55.307434  370114 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-965504 localhost] and IPs [192.168.39.82 127.0.0.1 ::1]
	I0929 11:16:55.452076  370114 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 11:16:55.452225  370114 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-965504 localhost] and IPs [192.168.39.82 127.0.0.1 ::1]
	I0929 11:16:55.976865  370114 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 11:16:56.086638  370114 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 11:16:56.477551  370114 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 11:16:56.477670  370114 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 11:16:56.689130  370114 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 11:16:56.860373  370114 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 11:16:57.249206  370114 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 11:16:57.366317  370114 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 11:16:57.717494  370114 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 11:16:57.718031  370114 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 11:16:57.720292  370114 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 11:16:57.722270  370114 out.go:252]   - Booting up control plane ...
	I0929 11:16:57.722403  370114 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 11:16:57.722512  370114 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 11:16:57.722609  370114 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 11:16:57.738890  370114 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 11:16:57.739116  370114 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 11:16:57.745881  370114 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 11:16:57.746046  370114 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 11:16:57.746106  370114 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 11:16:57.908777  370114 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 11:16:57.908998  370114 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 11:16:58.408484  370114 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.450945ms
	I0929 11:16:58.411404  370114 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 11:16:58.411570  370114 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.82:8443/livez
	I0929 11:16:58.411727  370114 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 11:16:58.411848  370114 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 11:17:03.108470  370114 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 4.699088053s
	I0929 11:17:03.452401  370114 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 5.043568166s
	I0929 11:17:05.410361  370114 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 7.001853429s
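
The control-plane checks above poll three endpoints (the apiserver's /livez on 8443, the controller-manager's /healthz on 10257, the scheduler's /livez on 10259) until each answers, within a 4m budget. A minimal polling sketch; it skips TLS verification because these components serve self-signed certificates, which is a simplification for the sketch rather than kubeadm's actual client setup:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthy polls url every 500ms until it returns HTTP 200 or the deadline passes.
    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        for _, u := range []string{
            "https://192.168.39.82:8443/livez",
            "https://127.0.0.1:10257/healthz",
            "https://127.0.0.1:10259/livez",
        } {
            if err := waitHealthy(u, 4*time.Minute); err != nil {
                fmt.Println(err)
            }
        }
    }
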
	I0929 11:17:05.437158  370114 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 11:17:05.456085  370114 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 11:17:05.478227  370114 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 11:17:05.478518  370114 kubeadm.go:310] [mark-control-plane] Marking the node addons-965504 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 11:17:05.490375  370114 kubeadm.go:310] [bootstrap-token] Using token: iqvnsv.kshl14ac750u9t02
	I0929 11:17:05.491779  370114 out.go:252]   - Configuring RBAC rules ...
	I0929 11:17:05.491919  370114 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 11:17:05.503006  370114 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 11:17:05.528327  370114 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 11:17:05.538021  370114 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 11:17:05.542549  370114 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 11:17:05.546845  370114 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 11:17:05.817756  370114 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 11:17:06.256326  370114 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 11:17:06.817695  370114 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 11:17:06.819994  370114 kubeadm.go:310] 
	I0929 11:17:06.820075  370114 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 11:17:06.820086  370114 kubeadm.go:310] 
	I0929 11:17:06.820208  370114 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 11:17:06.820226  370114 kubeadm.go:310] 
	I0929 11:17:06.820269  370114 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 11:17:06.821819  370114 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 11:17:06.821910  370114 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 11:17:06.821923  370114 kubeadm.go:310] 
	I0929 11:17:06.821965  370114 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 11:17:06.821979  370114 kubeadm.go:310] 
	I0929 11:17:06.822018  370114 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 11:17:06.822024  370114 kubeadm.go:310] 
	I0929 11:17:06.822087  370114 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 11:17:06.822166  370114 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 11:17:06.822272  370114 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 11:17:06.822285  370114 kubeadm.go:310] 
	I0929 11:17:06.822390  370114 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 11:17:06.822508  370114 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 11:17:06.822524  370114 kubeadm.go:310] 
	I0929 11:17:06.822645  370114 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token iqvnsv.kshl14ac750u9t02 \
	I0929 11:17:06.822819  370114 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6584cfb39d6d521de94c50ba68c73bacf142e1b11809c32d2bb4689966c9f242 \
	I0929 11:17:06.822849  370114 kubeadm.go:310] 	--control-plane 
	I0929 11:17:06.822858  370114 kubeadm.go:310] 
	I0929 11:17:06.823020  370114 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 11:17:06.823031  370114 kubeadm.go:310] 
	I0929 11:17:06.823152  370114 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token iqvnsv.kshl14ac750u9t02 \
	I0929 11:17:06.823309  370114 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6584cfb39d6d521de94c50ba68c73bacf142e1b11809c32d2bb4689966c9f242 
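
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded Subject Public Key Info, printed hex-encoded with a sha256: prefix. A sketch that recomputes it from ca.crt, useful for verifying a join command out of band:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }
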
	I0929 11:17:06.825662  370114 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 11:17:06.825712  370114 cni.go:84] Creating CNI manager for ""
	I0929 11:17:06.825725  370114 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 11:17:06.827376  370114 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0929 11:17:06.828623  370114 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0929 11:17:06.841029  370114 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
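
The 496-byte 1-k8s.conflist pushed above is what backs the recommended "bridge" CNI: a plugin chain that creates a Linux bridge, hands out pod IPs from the cluster's 10.244.0.0/16 podSubnet via host-local IPAM, and adds port-mapping support. The sketch below writes a representative conflist; the exact fields minikube renders may differ, so treat the JSON body as an illustration of the format, not the shipped file:

    package main

    import (
        "log"
        "os"
    )

    // A minimal bridge + portmap chain; the subnet matches the cluster's podSubnet.
    const conflist = `{
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
            log.Fatal(err)
        }
    }
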
	I0929 11:17:06.862880  370114 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 11:17:06.862954  370114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:17:06.863013  370114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-965504 minikube.k8s.io/updated_at=2025_09_29T11_17_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e087d081f23c6d1317bb12845422265d8d3490cf minikube.k8s.io/name=addons-965504 minikube.k8s.io/primary=true
	I0929 11:17:07.031084  370114 ops.go:34] apiserver oom_adj: -16
	I0929 11:17:07.031147  370114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:17:07.531901  370114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:17:08.032219  370114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:17:08.531288  370114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:17:09.031633  370114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:17:09.532233  370114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:17:10.032252  370114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:17:10.531481  370114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:17:11.031635  370114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:17:11.531954  370114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:17:11.745701  370114 kubeadm.go:1105] duration metric: took 4.882825008s to wait for elevateKubeSystemPrivileges
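
elevateKubeSystemPrivileges, timed above at 4.88s, first binds cluster-admin to the kube-system default service account and then retries "kubectl get sa default" until the API server can serve it; the timestamps show one attempt roughly every 500ms. The same bounded-retry pattern as a sketch (the 2m budget here is an assumption; minikube's actual timeout is not shown in the log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.34.0/kubectl"
        deadline := time.Now().Add(2 * time.Minute)

        for time.Now().Before(deadline) {
            cmd := exec.Command(kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                fmt.Println("default service account is available")
                return
            }
            // The log shows roughly one attempt every 500ms.
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default service account")
    }
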
	I0929 11:17:11.745747  370114 kubeadm.go:394] duration metric: took 18.41424794s to StartCluster
	I0929 11:17:11.745784  370114 settings.go:142] acquiring lock: {Name:mk1143e9344364f35458338f5354c9162487b91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:17:11.745930  370114 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21655-365455/kubeconfig
	I0929 11:17:11.746401  370114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/kubeconfig: {Name:mkd302531ec3362506563544f43831c9980ac365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:17:11.746638  370114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 11:17:11.746688  370114 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 11:17:11.746791  370114 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0929 11:17:11.746935  370114 config.go:182] Loaded profile config "addons-965504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:17:11.746952  370114 addons.go:69] Setting cloud-spanner=true in profile "addons-965504"
	I0929 11:17:11.746958  370114 addons.go:69] Setting storage-provisioner=true in profile "addons-965504"
	I0929 11:17:11.746989  370114 addons.go:238] Setting addon cloud-spanner=true in "addons-965504"
	I0929 11:17:11.747005  370114 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-965504"
	I0929 11:17:11.746952  370114 addons.go:69] Setting registry-creds=true in profile "addons-965504"
	I0929 11:17:11.746994  370114 addons.go:69] Setting registry=true in profile "addons-965504"
	I0929 11:17:11.747022  370114 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-965504"
	I0929 11:17:11.747031  370114 addons.go:238] Setting addon registry-creds=true in "addons-965504"
	I0929 11:17:11.747016  370114 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-965504"
	I0929 11:17:11.747041  370114 host.go:66] Checking if "addons-965504" exists ...
	I0929 11:17:11.747045  370114 addons.go:69] Setting ingress-dns=true in profile "addons-965504"
	I0929 11:17:11.747032  370114 addons.go:69] Setting default-storageclass=true in profile "addons-965504"
	I0929 11:17:11.747058  370114 addons.go:238] Setting addon ingress-dns=true in "addons-965504"
	I0929 11:17:11.747063  370114 host.go:66] Checking if "addons-965504" exists ...
	I0929 11:17:11.747087  370114 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-965504"
	I0929 11:17:11.747128  370114 host.go:66] Checking if "addons-965504" exists ...
	I0929 11:17:11.747164  370114 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-965504"
	I0929 11:17:11.747226  370114 host.go:66] Checking if "addons-965504" exists ...
	I0929 11:17:11.747610  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.747651  370114 addons.go:69] Setting volumesnapshots=true in profile "addons-965504"
	I0929 11:17:11.747686  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.747036  370114 addons.go:238] Setting addon registry=true in "addons-965504"
	I0929 11:17:11.747761  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.747792  370114 host.go:66] Checking if "addons-965504" exists ...
	I0929 11:17:11.747000  370114 addons.go:238] Setting addon storage-provisioner=true in "addons-965504"
	I0929 11:17:11.747874  370114 host.go:66] Checking if "addons-965504" exists ...
	I0929 11:17:11.747693  370114 addons.go:238] Setting addon volumesnapshots=true in "addons-965504"
	I0929 11:17:11.747913  370114 host.go:66] Checking if "addons-965504" exists ...
	I0929 11:17:11.747040  370114 addons.go:69] Setting ingress=true in profile "addons-965504"
	I0929 11:17:11.748189  370114 addons.go:238] Setting addon ingress=true in "addons-965504"
	I0929 11:17:11.748206  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.748208  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.748216  370114 host.go:66] Checking if "addons-965504" exists ...
	I0929 11:17:11.748239  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.748258  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.747609  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.748308  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.748340  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.748360  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.747661  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.748522  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.747627  370114 addons.go:69] Setting inspektor-gadget=true in profile "addons-965504"
	I0929 11:17:11.748638  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.748599  370114 addons.go:238] Setting addon inspektor-gadget=true in "addons-965504"
	I0929 11:17:11.748730  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.748743  370114 host.go:66] Checking if "addons-965504" exists ...
	I0929 11:17:11.746930  370114 addons.go:69] Setting gcp-auth=true in profile "addons-965504"
	I0929 11:17:11.748874  370114 mustload.go:65] Loading cluster: addons-965504
	I0929 11:17:11.747632  370114 addons.go:69] Setting metrics-server=true in profile "addons-965504"
	I0929 11:17:11.749253  370114 addons.go:238] Setting addon metrics-server=true in "addons-965504"
	I0929 11:17:11.749281  370114 host.go:66] Checking if "addons-965504" exists ...
	I0929 11:17:11.749664  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.749684  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.747631  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.750153  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.750229  370114 out.go:179] * Verifying Kubernetes components...
	I0929 11:17:11.746946  370114 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-965504"
	I0929 11:17:11.750420  370114 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-965504"
	I0929 11:17:11.750449  370114 host.go:66] Checking if "addons-965504" exists ...
	I0929 11:17:11.750908  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.750929  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.747635  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.751112  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.747636  370114 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-965504"
	I0929 11:17:11.751788  370114 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-965504"
	I0929 11:17:11.751816  370114 host.go:66] Checking if "addons-965504" exists ...
	I0929 11:17:11.747642  370114 addons.go:69] Setting volcano=true in profile "addons-965504"
	I0929 11:17:11.752045  370114 addons.go:238] Setting addon volcano=true in "addons-965504"
	I0929 11:17:11.752074  370114 host.go:66] Checking if "addons-965504" exists ...
	I0929 11:17:11.747672  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.752125  370114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:17:11.746931  370114 addons.go:69] Setting yakd=true in profile "addons-965504"
	I0929 11:17:11.752255  370114 addons.go:238] Setting addon yakd=true in "addons-965504"
	I0929 11:17:11.752259  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.752278  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.752282  370114 host.go:66] Checking if "addons-965504" exists ...
	I0929 11:17:11.752437  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.752465  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.757447  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.757480  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.758701  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.759372  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.759321  370114 config.go:182] Loaded profile config "addons-965504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:17:11.760046  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.760074  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.782344  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32913
	I0929 11:17:11.783669  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39081
	I0929 11:17:11.785218  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40125
	I0929 11:17:11.785851  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.786673  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0929 11:17:11.786962  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.786992  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.787381  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.787507  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.787910  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.788077  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.788100  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.788940  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.789037  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.789093  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43481
	I0929 11:17:11.789350  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.789375  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.789675  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.789696  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.789769  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.790516  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.790609  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41811
	I0929 11:17:11.790662  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.790829  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.790963  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.791420  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.791437  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.791597  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.791928  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.791832  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34771
	I0929 11:17:11.792124  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.792514  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.792902  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.793052  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.793209  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.793286  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.793529  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.793681  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.793939  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.794022  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.794449  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.795025  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.795053  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.796242  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33285
	I0929 11:17:11.799017  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40959
	I0929 11:17:11.799039  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39423
	I0929 11:17:11.799060  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.799018  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.799607  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.799628  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.799793  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.799795  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.799808  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.800222  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.800235  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.800909  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.800935  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.801341  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.801868  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.801913  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.802862  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.802934  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.803194  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.803946  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.804325  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.809507  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43735
	I0929 11:17:11.811178  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.811235  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.811508  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.812268  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.812308  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.820582  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.821677  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45733
	I0929 11:17:11.828078  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42159
	I0929 11:17:11.828096  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45249
	I0929 11:17:11.828077  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.828089  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37219
	I0929 11:17:11.828732  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.828761  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.828793  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.828830  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.828847  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.828949  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.830178  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.830200  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.830338  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.830213  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.830223  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.830686  370114 main.go:141] libmachine: (addons-965504) Calling .GetState
	I0929 11:17:11.830920  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.831274  370114 main.go:141] libmachine: (addons-965504) Calling .GetState
	I0929 11:17:11.831307  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.831347  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.831382  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.831429  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.831906  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43021
	I0929 11:17:11.832481  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.832856  370114 main.go:141] libmachine: (addons-965504) Calling .GetState
	I0929 11:17:11.833247  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.833628  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.833694  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.834239  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.834503  370114 main.go:141] libmachine: (addons-965504) Calling .GetState
	I0929 11:17:11.834941  370114 main.go:141] libmachine: (addons-965504) Calling .DriverName
	I0929 11:17:11.835816  370114 main.go:141] libmachine: (addons-965504) Calling .DriverName
	I0929 11:17:11.836285  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45175
	I0929 11:17:11.830263  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.836413  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.836899  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.837295  370114 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 11:17:11.837574  370114 main.go:141] libmachine: (addons-965504) Calling .GetState
	I0929 11:17:11.838127  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.838958  370114 main.go:141] libmachine: (addons-965504) Calling .DriverName
	I0929 11:17:11.839510  370114 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 11:17:11.839964  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.840010  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.839982  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45239
	I0929 11:17:11.840664  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.840742  370114 out.go:179]   - Using image docker.io/registry:3.0.0
	I0929 11:17:11.840881  370114 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 11:17:11.840926  370114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 11:17:11.840949  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:17:11.841047  370114 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 11:17:11.841197  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.841405  370114 main.go:141] libmachine: (addons-965504) Calling .GetState
	I0929 11:17:11.841755  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.841776  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.842191  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.842306  370114 main.go:141] libmachine: (addons-965504) Calling .DriverName
	I0929 11:17:11.842350  370114 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 11:17:11.842363  370114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 11:17:11.842398  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:17:11.842484  370114 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 11:17:11.842495  370114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 11:17:11.842513  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:17:11.843183  370114 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-965504"
	I0929 11:17:11.843238  370114 host.go:66] Checking if "addons-965504" exists ...
	I0929 11:17:11.843603  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.843627  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.844023  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.844096  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46033
	I0929 11:17:11.844624  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.844067  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.845434  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.845456  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.845545  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43773
	I0929 11:17:11.845575  370114 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0929 11:17:11.845907  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.846775  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43497
	I0929 11:17:11.846823  370114 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 11:17:11.846843  370114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 11:17:11.846867  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:17:11.847765  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41301
	I0929 11:17:11.847934  370114 main.go:141] libmachine: (addons-965504) Calling .DriverName
	I0929 11:17:11.849609  370114 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 11:17:11.851921  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.853009  370114 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 11:17:11.853032  370114 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 11:17:11.853056  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:17:11.853244  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36477
	I0929 11:17:11.853808  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.854077  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.854119  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.855512  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.855736  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.856294  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40477
	I0929 11:17:11.856650  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:17:11.856671  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.856701  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.857214  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.857233  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.858629  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:17:11.859048  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.859141  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.859211  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37789
	I0929 11:17:11.859355  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:17:11.859538  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.859563  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:17:11.859602  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.859631  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:17:11.859642  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.859663  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:17:11.859682  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.859716  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:17:11.859767  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.859768  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.859798  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:17:11.859846  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:17:11.859898  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:17:11.859911  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.860073  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:17:11.860134  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:17:11.860303  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:17:11.860442  370114 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/id_rsa Username:docker}
	I0929 11:17:11.860795  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.860828  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.860962  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.861352  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.861577  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.861297  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:17:11.861291  370114 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/id_rsa Username:docker}
	I0929 11:17:11.861805  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.861853  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.861332  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:17:11.861415  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:17:11.862326  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.861483  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:17:11.862674  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.862695  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.862710  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:17:11.862854  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.862854  370114 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/id_rsa Username:docker}
	I0929 11:17:11.862867  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.862988  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:17:11.863037  370114 main.go:141] libmachine: (addons-965504) Calling .GetState
	I0929 11:17:11.863081  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:17:11.863131  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.863151  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.863208  370114 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/id_rsa Username:docker}
	I0929 11:17:11.863271  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.863312  370114 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/id_rsa Username:docker}
	I0929 11:17:11.863462  370114 main.go:141] libmachine: (addons-965504) Calling .GetState
	I0929 11:17:11.864374  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.864455  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.864544  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.864738  370114 main.go:141] libmachine: (addons-965504) Calling .GetState
	I0929 11:17:11.865743  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.866112  370114 main.go:141] libmachine: (addons-965504) Calling .GetState
	I0929 11:17:11.866769  370114 main.go:141] libmachine: (addons-965504) Calling .GetState
	I0929 11:17:11.868686  370114 main.go:141] libmachine: (addons-965504) Calling .DriverName
	I0929 11:17:11.869701  370114 host.go:66] Checking if "addons-965504" exists ...
	I0929 11:17:11.870689  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34241
	I0929 11:17:11.871013  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39861
	I0929 11:17:11.871044  370114 main.go:141] libmachine: (addons-965504) Calling .DriverName
	I0929 11:17:11.871108  370114 main.go:141] libmachine: (addons-965504) Calling .DriverName
	I0929 11:17:11.871325  370114 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 11:17:11.871669  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.872136  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.872230  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.872292  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.872345  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.872300  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.872715  370114 addons.go:238] Setting addon default-storageclass=true in "addons-965504"
	I0929 11:17:11.872727  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.872754  370114 host.go:66] Checking if "addons-965504" exists ...
	I0929 11:17:11.872948  370114 main.go:141] libmachine: (addons-965504) Calling .GetState
	I0929 11:17:11.872958  370114 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 11:17:11.873117  370114 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 11:17:11.873184  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.873267  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.873619  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.873641  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.874062  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.874216  370114 main.go:141] libmachine: (addons-965504) Calling .GetState
	I0929 11:17:11.874588  370114 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 11:17:11.874614  370114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 11:17:11.874632  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:17:11.875133  370114 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 11:17:11.875256  370114 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 11:17:11.875279  370114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 11:17:11.875297  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:17:11.877140  370114 main.go:141] libmachine: (addons-965504) Calling .DriverName
	I0929 11:17:11.877231  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37073
	I0929 11:17:11.877368  370114 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 11:17:11.877855  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:11.877880  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:11.878674  370114 main.go:141] libmachine: (addons-965504) DBG | Closing plugin on server side
	I0929 11:17:11.879436  370114 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0929 11:17:11.880571  370114 main.go:141] libmachine: (addons-965504) Calling .DriverName
	I0929 11:17:11.881989  370114 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 11:17:11.882239  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.883181  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.883228  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.883295  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.883324  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.883449  370114 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 11:17:11.883567  370114 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 11:17:11.883580  370114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 11:17:11.883601  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:17:11.883675  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.884754  370114 main.go:141] libmachine: Failed to make call to close driver server: unexpected EOF
	I0929 11:17:11.884770  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:11.884768  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:17:11.884780  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:11.884787  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:11.884811  370114 main.go:141] libmachine: (addons-965504) Calling .GetState
	I0929 11:17:11.884821  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:17:11.884828  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:17:11.884839  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.884844  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.885219  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:11.885230  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	W0929 11:17:11.885336  370114 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0929 11:17:11.885408  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:17:11.885706  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:17:11.886128  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:17:11.886525  370114 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 11:17:11.886707  370114 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/id_rsa Username:docker}
	I0929 11:17:11.887627  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:17:11.887885  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:17:11.888118  370114 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/id_rsa Username:docker}
	I0929 11:17:11.888685  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38379
	I0929 11:17:11.888998  370114 main.go:141] libmachine: (addons-965504) Calling .DriverName
	I0929 11:17:11.889428  370114 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 11:17:11.890014  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.890469  370114 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0929 11:17:11.891137  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.891161  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.891483  370114 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 11:17:11.891576  370114 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 11:17:11.891591  370114 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 11:17:11.891611  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:17:11.892257  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.892500  370114 main.go:141] libmachine: (addons-965504) Calling .GetState
	I0929 11:17:11.893024  370114 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 11:17:11.893045  370114 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 11:17:11.893074  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:17:11.894213  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.895335  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46803
	I0929 11:17:11.895867  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.895985  370114 main.go:141] libmachine: (addons-965504) Calling .DriverName
	I0929 11:17:11.896631  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.896679  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.897175  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.897564  370114 main.go:141] libmachine: (addons-965504) Calling .GetState
	I0929 11:17:11.897910  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:17:11.897935  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.898366  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:17:11.898528  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40321
	I0929 11:17:11.898558  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:17:11.898674  370114 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 11:17:11.898824  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:17:11.899105  370114 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/id_rsa Username:docker}
	I0929 11:17:11.899221  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.899428  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39093
	I0929 11:17:11.899693  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41529
	I0929 11:17:11.899957  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.900040  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.900157  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.900241  370114 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 11:17:11.900272  370114 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 11:17:11.900301  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:17:11.900473  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.900685  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.900700  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.900874  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.901152  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.901277  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.901335  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.901563  370114 main.go:141] libmachine: (addons-965504) Calling .DriverName
	I0929 11:17:11.902329  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:17:11.901747  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:17:11.902437  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.901982  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.902034  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:17:11.902148  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.902182  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:17:11.902796  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.902818  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:11.902865  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:11.902880  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:17:11.903004  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.903023  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.903061  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:17:11.903214  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:17:11.903261  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:17:11.903391  370114 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/id_rsa Username:docker}
	I0929 11:17:11.903608  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.903752  370114 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/id_rsa Username:docker}
	I0929 11:17:11.903958  370114 main.go:141] libmachine: (addons-965504) Calling .GetState
	I0929 11:17:11.903964  370114 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 11:17:11.905211  370114 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 11:17:11.905231  370114 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 11:17:11.905250  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:17:11.905906  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.906719  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:17:11.906740  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.907068  370114 main.go:141] libmachine: (addons-965504) Calling .DriverName
	I0929 11:17:11.907542  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:17:11.907812  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:17:11.908014  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:17:11.908187  370114 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/id_rsa Username:docker}
	I0929 11:17:11.908588  370114 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 11:17:11.909829  370114 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 11:17:11.910254  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44133
	I0929 11:17:11.910623  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.910739  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.911114  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:17:11.911142  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.911282  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.911299  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.911282  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:17:11.911500  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:17:11.911665  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.911727  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:17:11.911899  370114 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/id_rsa Username:docker}
	I0929 11:17:11.912049  370114 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 11:17:11.912135  370114 main.go:141] libmachine: (addons-965504) Calling .DriverName
	I0929 11:17:11.913235  370114 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 11:17:11.913253  370114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 11:17:11.913269  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:17:11.916905  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.917373  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:17:11.917398  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.917772  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:17:11.917943  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:17:11.918130  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:17:11.918285  370114 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/id_rsa Username:docker}
	I0929 11:17:11.922194  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34363
	I0929 11:17:11.922709  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.923176  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.923203  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.923536  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.923747  370114 main.go:141] libmachine: (addons-965504) Calling .GetState
	I0929 11:17:11.925700  370114 main.go:141] libmachine: (addons-965504) Calling .DriverName
	I0929 11:17:11.926012  370114 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 11:17:11.926030  370114 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 11:17:11.926050  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:17:11.926217  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40509
	I0929 11:17:11.926604  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:11.927108  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:11.927146  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:11.927558  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:11.927802  370114 main.go:141] libmachine: (addons-965504) Calling .GetState
	I0929 11:17:11.929641  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.929656  370114 main.go:141] libmachine: (addons-965504) Calling .DriverName
	I0929 11:17:11.930146  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:17:11.930178  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.930341  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:17:11.930535  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:17:11.930764  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:17:11.930925  370114 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/id_rsa Username:docker}
	I0929 11:17:11.931654  370114 out.go:179]   - Using image docker.io/busybox:stable
	I0929 11:17:11.932885  370114 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0929 11:17:11.934196  370114 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 11:17:11.934215  370114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 11:17:11.934230  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:17:11.937155  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.937575  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:17:11.937597  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:11.937824  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:17:11.938083  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:17:11.938242  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:17:11.938423  370114 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/id_rsa Username:docker}
	I0929 11:17:12.689615  370114 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 11:17:12.689647  370114 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 11:17:12.738237  370114 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:17:12.738274  370114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 11:17:12.738282  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 11:17:12.762207  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 11:17:12.782328  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 11:17:12.881228  370114 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:17:12.881265  370114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 11:17:12.890218  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 11:17:12.896267  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 11:17:12.983041  370114 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 11:17:12.983078  370114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 11:17:13.160791  370114 node_ready.go:35] waiting up to 6m0s for node "addons-965504" to be "Ready" ...
	I0929 11:17:13.162711  370114 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 11:17:13.162735  370114 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 11:17:13.163254  370114 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 11:17:13.163281  370114 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 11:17:13.169649  370114 node_ready.go:49] node "addons-965504" is "Ready"
	I0929 11:17:13.169672  370114 node_ready.go:38] duration metric: took 8.846614ms for node "addons-965504" to be "Ready" ...
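A note on the readiness gate above: the node_ready wait resolves as soon as the node object reports the NodeReady condition with status True. A minimal client-go sketch of that kind of check follows; it is an illustrative reconstruction, not minikube's actual node_ready.go, and the kubeconfig path and function name are assumptions.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the named node carries the NodeReady
    // condition with status True, which is what a "Ready" gate checks.
    func nodeIsReady(cs *kubernetes.Clientset, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	// Assumption: kubeconfig at the default location (~/.kube/config).
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ready, err := nodeIsReady(cs, "addons-965504")
    	fmt.Println(ready, err)
    }

Here the gate passes 8.8ms after the wait starts, because the node was already Ready when the addon phase began.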
	I0929 11:17:13.169687  370114 api_server.go:52] waiting for apiserver process to appear ...
	I0929 11:17:13.169736  370114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:17:13.171549  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 11:17:13.206505  370114 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 11:17:13.206545  370114 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 11:17:13.215783  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 11:17:13.249491  370114 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 11:17:13.249517  370114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 11:17:13.289569  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 11:17:13.307496  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 11:17:13.463614  370114 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 11:17:13.463656  370114 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 11:17:13.472060  370114 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 11:17:13.472091  370114 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 11:17:13.474235  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:17:13.507005  370114 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 11:17:13.507035  370114 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 11:17:13.549861  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 11:17:13.552074  370114 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 11:17:13.552097  370114 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 11:17:13.768981  370114 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 11:17:13.769015  370114 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 11:17:13.795790  370114 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 11:17:13.795823  370114 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 11:17:13.874196  370114 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 11:17:13.874223  370114 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 11:17:13.884842  370114 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 11:17:13.884870  370114 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 11:17:14.046100  370114 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 11:17:14.046137  370114 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 11:17:14.072826  370114 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 11:17:14.072859  370114 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 11:17:14.114774  370114 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 11:17:14.114799  370114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 11:17:14.159742  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 11:17:14.288536  370114 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 11:17:14.288562  370114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 11:17:14.333299  370114 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 11:17:14.333331  370114 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 11:17:14.437094  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 11:17:14.537057  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 11:17:14.611730  370114 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 11:17:14.611757  370114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 11:17:15.028642  370114 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 11:17:15.028674  370114 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 11:17:15.502181  370114 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.763853898s)
	I0929 11:17:15.502229  370114 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
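The 2.76s "Completed" entry above closes the /bin/bash pipeline launched at 11:17:12.738: it reads the coredns ConfigMap, uses sed to splice a hosts block in front of the "forward . /etc/resolv.conf" directive and a "log" directive in front of "errors", then writes the result back with "kubectl replace -f -". Reconstructed purely from those sed expressions (an illustration of the resulting Corefile fragment, not a dump of the live ConfigMap, with elided directives shown as "..."), the server block ends up as:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf ...
        ...
    }

The net effect, as the start.go line above notes, is that in-cluster DNS resolves host.minikube.internal to the host-side gateway 192.168.39.1 and falls through to the normal forwarder for everything else.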
	I0929 11:17:15.502279  370114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.763970326s)
	I0929 11:17:15.502316  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:15.502335  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:15.502347  370114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.740091571s)
	I0929 11:17:15.502400  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:15.502412  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:15.502455  370114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.720086787s)
	I0929 11:17:15.502489  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:15.502505  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:15.502667  370114 main.go:141] libmachine: (addons-965504) DBG | Closing plugin on server side
	I0929 11:17:15.502729  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:15.502738  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:15.502747  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:15.502754  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:15.502865  370114 main.go:141] libmachine: (addons-965504) DBG | Closing plugin on server side
	I0929 11:17:15.502891  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:15.502901  370114 main.go:141] libmachine: (addons-965504) DBG | Closing plugin on server side
	I0929 11:17:15.502904  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:15.502913  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:15.502920  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:15.502924  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:15.502930  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:15.502938  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:15.502943  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:15.504945  370114 main.go:141] libmachine: (addons-965504) DBG | Closing plugin on server side
	I0929 11:17:15.505010  370114 main.go:141] libmachine: (addons-965504) DBG | Closing plugin on server side
	I0929 11:17:15.505033  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:15.505040  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:15.505457  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:15.505468  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:15.505643  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:15.505660  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:15.532835  370114 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 11:17:15.532865  370114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 11:17:15.900509  370114 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 11:17:15.900537  370114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 11:17:16.081904  370114 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-965504" context rescaled to 1 replicas
	I0929 11:17:16.157139  370114 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 11:17:16.157169  370114 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 11:17:16.540498  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 11:17:17.046673  370114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.150363862s)
	I0929 11:17:17.046732  370114 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.876977621s)
	I0929 11:17:17.046747  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:17.046759  370114 api_server.go:72] duration metric: took 5.300038753s to wait for apiserver process to appear ...
	I0929 11:17:17.046767  370114 api_server.go:88] waiting for apiserver healthz status ...
	I0929 11:17:17.046792  370114 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I0929 11:17:17.046760  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:17.047186  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:17.047205  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:17.047215  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:17.047222  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:17.047242  370114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.156978139s)
	I0929 11:17:17.047281  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:17.047308  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:17.047504  370114 main.go:141] libmachine: (addons-965504) DBG | Closing plugin on server side
	I0929 11:17:17.047505  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:17.047525  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:17.047707  370114 main.go:141] libmachine: (addons-965504) DBG | Closing plugin on server side
	I0929 11:17:17.047733  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:17.047746  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:17.047754  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:17.047762  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:17.048024  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:17.048038  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:17.048030  370114 main.go:141] libmachine: (addons-965504) DBG | Closing plugin on server side
	I0929 11:17:17.085086  370114 api_server.go:279] https://192.168.39.82:8443/healthz returned 200:
	ok
	I0929 11:17:17.104155  370114 api_server.go:141] control plane version: v1.34.0
	I0929 11:17:17.104201  370114 api_server.go:131] duration metric: took 57.422516ms to wait for apiserver health ...
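For reference, the healthz gate that just passed is a plain HTTPS GET that succeeds once the endpoint returns 200 with body "ok", as logged at 11:17:17.085. A self-contained sketch of such a probe follows; it is illustrative only, and the retry interval, overall timeout, and skipped certificate verification are assumptions rather than minikube's actual api_server.go behavior.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"strings"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200 with body "ok",
    // or the deadline passes.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The apiserver cert is signed by minikube's own CA; verification
    		// is skipped here purely to keep the sketch self-contained.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if resp, err := client.Get(url); err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
    				return nil // matches the "returned 200: ok" entry above
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://192.168.39.82:8443/healthz", 2*time.Minute))
    }

In this run the probe succeeded on the first attempt, so the health wait accounts for only 57ms before the system_pods wait begins.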
	I0929 11:17:17.104215  370114 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 11:17:17.159527  370114 system_pods.go:59] 13 kube-system pods found
	I0929 11:17:17.159584  370114 system_pods.go:61] "amd-gpu-device-plugin-t8rkt" [214ad880-85fe-4b9e-9ff8-356871df65cf] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 11:17:17.159595  370114 system_pods.go:61] "coredns-66bc5c9577-285tb" [890f5f5e-f938-4859-911a-76df4c079c7d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:17:17.159608  370114 system_pods.go:61] "coredns-66bc5c9577-pc9fb" [5d442e7e-0f52-435c-b8c9-6e17edcab4d9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:17:17.159616  370114 system_pods.go:61] "etcd-addons-965504" [1e12ae5b-6593-4628-8b88-6122ce6d8594] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 11:17:17.159625  370114 system_pods.go:61] "kube-apiserver-addons-965504" [ae963eca-74af-4bde-9858-95ed1dc11890] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 11:17:17.159640  370114 system_pods.go:61] "kube-controller-manager-addons-965504" [54ad9e02-bcbf-46a2-b4a2-f79991245a74] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 11:17:17.159650  370114 system_pods.go:61] "kube-proxy-dhkpx" [eed0693f-15ae-42ed-9bfa-8d992b2cd1ad] Running
	I0929 11:17:17.159660  370114 system_pods.go:61] "kube-scheduler-addons-965504" [8a24273e-9282-4a04-858b-68cd4a6438dd] Running
	I0929 11:17:17.159668  370114 system_pods.go:61] "nvidia-device-plugin-daemonset-4gm9t" [b4805764-22e3-451f-ad53-ca25d7965722] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 11:17:17.159679  370114 system_pods.go:61] "registry-66898fdd98-l6ndb" [1cf4fa8a-dd96-410b-be59-11d26570dc2f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 11:17:17.159688  370114 system_pods.go:61] "registry-creds-764b6fb674-f4hjq" [5391d599-8a4d-43b5-b808-d56c54b4d760] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 11:17:17.159699  370114 system_pods.go:61] "registry-proxy-86hvz" [e0c0f7c7-7f2e-460b-abcd-0429df2def6c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 11:17:17.159714  370114 system_pods.go:61] "storage-provisioner" [b5abf2f6-fbaa-408e-8e41-ecf4a4e30109] Pending
	I0929 11:17:17.159723  370114 system_pods.go:74] duration metric: took 55.500944ms to wait for pod list to return data ...
	I0929 11:17:17.159738  370114 default_sa.go:34] waiting for default service account to be created ...
	I0929 11:17:17.187816  370114 default_sa.go:45] found service account: "default"
	I0929 11:17:17.187868  370114 default_sa.go:55] duration metric: took 28.121132ms for default service account to be created ...
	I0929 11:17:17.187882  370114 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 11:17:17.230208  370114 system_pods.go:86] 13 kube-system pods found
	I0929 11:17:17.230252  370114 system_pods.go:89] "amd-gpu-device-plugin-t8rkt" [214ad880-85fe-4b9e-9ff8-356871df65cf] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 11:17:17.230266  370114 system_pods.go:89] "coredns-66bc5c9577-285tb" [890f5f5e-f938-4859-911a-76df4c079c7d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:17:17.230280  370114 system_pods.go:89] "coredns-66bc5c9577-pc9fb" [5d442e7e-0f52-435c-b8c9-6e17edcab4d9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:17:17.230289  370114 system_pods.go:89] "etcd-addons-965504" [1e12ae5b-6593-4628-8b88-6122ce6d8594] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 11:17:17.230303  370114 system_pods.go:89] "kube-apiserver-addons-965504" [ae963eca-74af-4bde-9858-95ed1dc11890] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 11:17:17.230313  370114 system_pods.go:89] "kube-controller-manager-addons-965504" [54ad9e02-bcbf-46a2-b4a2-f79991245a74] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 11:17:17.230326  370114 system_pods.go:89] "kube-proxy-dhkpx" [eed0693f-15ae-42ed-9bfa-8d992b2cd1ad] Running
	I0929 11:17:17.230333  370114 system_pods.go:89] "kube-scheduler-addons-965504" [8a24273e-9282-4a04-858b-68cd4a6438dd] Running
	I0929 11:17:17.230342  370114 system_pods.go:89] "nvidia-device-plugin-daemonset-4gm9t" [b4805764-22e3-451f-ad53-ca25d7965722] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 11:17:17.230354  370114 system_pods.go:89] "registry-66898fdd98-l6ndb" [1cf4fa8a-dd96-410b-be59-11d26570dc2f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 11:17:17.230362  370114 system_pods.go:89] "registry-creds-764b6fb674-f4hjq" [5391d599-8a4d-43b5-b808-d56c54b4d760] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 11:17:17.230376  370114 system_pods.go:89] "registry-proxy-86hvz" [e0c0f7c7-7f2e-460b-abcd-0429df2def6c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 11:17:17.230387  370114 system_pods.go:89] "storage-provisioner" [b5abf2f6-fbaa-408e-8e41-ecf4a4e30109] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 11:17:17.230398  370114 system_pods.go:126] duration metric: took 42.507675ms to wait for k8s-apps to be running ...
	I0929 11:17:17.230414  370114 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 11:17:17.230471  370114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
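
The kubelet wait reduces to running systemctl is-active --quiet and trusting the exit status. A local sketch of the same check (minikube runs it over SSH on the guest, and the logged command also carries a literal "service" token; the sketch keeps just the unit name):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // --quiet suppresses output; the exit status alone says whether
        // the unit is active (0) or not (non-zero).
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        if err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }
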
	I0929 11:17:17.506881  370114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.335271193s)
	I0929 11:17:17.506967  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:17.507001  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:17.507341  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:17.507356  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:17.507372  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:17.507384  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:17.507623  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:17.507639  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:18.502627  370114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.286800821s)
	I0929 11:17:18.502691  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:18.502704  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:18.503063  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:18.503089  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:18.503098  370114 main.go:141] libmachine: (addons-965504) DBG | Closing plugin on server side
	I0929 11:17:18.503109  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:18.503121  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:18.503410  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:18.503430  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:18.629616  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:18.629650  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:18.629949  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:18.629988  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:18.630007  370114 main.go:141] libmachine: (addons-965504) DBG | Closing plugin on server side
	I0929 11:17:19.394568  370114 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 11:17:19.394611  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:17:19.398562  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:19.399092  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:17:19.399151  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:19.399343  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:17:19.399578  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:17:19.399843  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:17:19.400056  370114 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/id_rsa Username:docker}
	I0929 11:17:19.574454  370114 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
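
The "scp memory --> /var/lib/minikube/..." lines stream an in-memory payload to a path on the guest over the SSH connection assembled just above. A sketch of that idea, assuming golang.org/x/crypto/ssh rather than minikube's sshutil; the user, address and key path mirror the log but are this sketch's inputs, and writing via sudo tee is the sketch's choice of mechanism:

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func writeRemote(addr, user, keyPath, dst string, data []byte) error {
        pem, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(pem)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        // sudo tee writes stdin to the destination without a real scp transfer.
        return sess.Run(fmt.Sprintf("sudo tee %q >/dev/null", dst))
    }

    func main() {
        err := writeRemote("192.168.39.82:22", "docker",
            "/home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/id_rsa",
            "/var/lib/minikube/google_cloud_project", []byte("my-project")) // payload assumed
        if err != nil {
            log.Fatal(err)
        }
    }
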
	I0929 11:17:19.635408  370114 addons.go:238] Setting addon gcp-auth=true in "addons-965504"
	I0929 11:17:19.635468  370114 host.go:66] Checking if "addons-965504" exists ...
	I0929 11:17:19.635835  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:19.635867  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:19.650686  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45879
	I0929 11:17:19.651303  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:19.651803  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:19.651828  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:19.652305  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:19.652867  370114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:17:19.652906  370114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:17:19.666671  370114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37859
	I0929 11:17:19.667239  370114 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:17:19.667757  370114 main.go:141] libmachine: Using API Version  1
	I0929 11:17:19.667785  370114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:17:19.668186  370114 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:17:19.668374  370114 main.go:141] libmachine: (addons-965504) Calling .GetState
	I0929 11:17:19.670353  370114 main.go:141] libmachine: (addons-965504) Calling .DriverName
	I0929 11:17:19.670655  370114 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 11:17:19.670714  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHHostname
	I0929 11:17:19.673801  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:19.674277  370114 main.go:141] libmachine: (addons-965504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:48:36", ip: ""} in network mk-addons-965504: {Iface:virbr1 ExpiryTime:2025-09-29 12:16:41 +0000 UTC Type:0 Mac:52:54:00:54:48:36 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-965504 Clientid:01:52:54:00:54:48:36}
	I0929 11:17:19.674326  370114 main.go:141] libmachine: (addons-965504) DBG | domain addons-965504 has defined IP address 192.168.39.82 and MAC address 52:54:00:54:48:36 in network mk-addons-965504
	I0929 11:17:19.674498  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHPort
	I0929 11:17:19.674685  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHKeyPath
	I0929 11:17:19.674836  370114 main.go:141] libmachine: (addons-965504) Calling .GetSSHUsername
	I0929 11:17:19.674990  370114 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/addons-965504/id_rsa Username:docker}
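
The recurring "Launching plugin server for driver kvm2" / "Plugin server listening at address 127.0.0.1:NNNNN" / "Calling .Close" lines come from libmachine's plugin model: each machine driver serves RPC on a loopback port and the main process dials it per operation. A toy in-process sketch of that shape using net/rpc; the real libmachine plugin protocol and method set differ:

    package main

    import (
        "fmt"
        "log"
        "net"
        "net/rpc"
    )

    type Driver struct{}

    // GetVersion mirrors the "() Calling .GetVersion" step in the log.
    func (d *Driver) GetVersion(args int, v *int) error { *v = 1; return nil }

    // Close mirrors "(addons-965504) Calling .Close".
    func (d *Driver) Close(args int, ack *bool) error { *ack = true; return nil }

    func main() {
        srv := rpc.NewServer()
        if err := srv.Register(&Driver{}); err != nil {
            log.Fatal(err)
        }
        ln, err := net.Listen("tcp", "127.0.0.1:0") // random loopback port, as in the log
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("Plugin server listening at address", ln.Addr())
        go srv.Accept(ln)

        client, err := rpc.Dial("tcp", ln.Addr().String())
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        var version int
        if err := client.Call("Driver.GetVersion", 0, &version); err != nil {
            log.Fatal(err)
        }
        fmt.Println("Using API Version", version)
        var closed bool
        if err := client.Call("Driver.Close", 0, &closed); err != nil {
            log.Fatal(err)
        }
    }
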
	I0929 11:17:20.365545  370114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.075913358s)
	I0929 11:17:20.365591  370114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.058065827s)
	I0929 11:17:20.365618  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:20.365630  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:20.365618  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:20.365855  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:20.365721  370114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.891441311s)
	W0929 11:17:20.365967  370114 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:17:20.366014  370114 retry.go:31] will retry after 200.92419ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
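
The W/retry.go pairs above show the addon apply loop: when kubectl apply exits non-zero, the full stdout/stderr is logged and the command is re-run after a delay. The growing, uneven delays in the log (200.92ms, 247.51ms, 481.52ms, ...) suggest a jittered backoff; the exact policy below is this sketch's assumption, not retry.go's implementation:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func retryWithBackoff(attempts int, base time.Duration, f func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = f(); err == nil {
                return nil
            }
            // Double the delay each attempt and add jitter, which yields
            // uneven intervals like the ones in the log above.
            d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %s: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        calls := 0
        err := retryWithBackoff(5, 200*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                return fmt.Errorf("Process exited with status 1")
            }
            return nil
        })
        fmt.Println("result:", err)
    }
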
	I0929 11:17:20.366023  370114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.206251102s)
	I0929 11:17:20.365772  370114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.815880892s)
	I0929 11:17:20.366049  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:20.366111  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:20.366053  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:20.366110  370114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.928982426s)
	I0929 11:17:20.366186  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:20.366195  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:20.366213  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:20.366219  370114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.829122947s)
	W0929 11:17:20.366269  370114 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 11:17:20.366293  370114 retry.go:31] will retry after 247.514495ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 11:17:20.366375  370114 main.go:141] libmachine: (addons-965504) DBG | Closing plugin on server side
	I0929 11:17:20.366402  370114 main.go:141] libmachine: (addons-965504) DBG | Closing plugin on server side
	I0929 11:17:20.366407  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:20.366426  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:20.366446  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:20.366453  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:20.366474  370114 main.go:141] libmachine: (addons-965504) DBG | Closing plugin on server side
	I0929 11:17:20.366434  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:20.366500  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:20.366506  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:20.366507  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:20.366516  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:20.366525  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:20.366517  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:20.366568  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:20.366578  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:20.366586  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:20.366570  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:20.366593  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:20.366902  370114 main.go:141] libmachine: (addons-965504) DBG | Closing plugin on server side
	I0929 11:17:20.366903  370114 main.go:141] libmachine: (addons-965504) DBG | Closing plugin on server side
	I0929 11:17:20.366928  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:20.366961  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:20.367032  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:20.367047  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:20.366994  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:20.367245  370114 addons.go:479] Verifying addon metrics-server=true in "addons-965504"
	I0929 11:17:20.367033  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:20.368247  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:20.368257  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:20.366943  370114 main.go:141] libmachine: (addons-965504) DBG | Closing plugin on server side
	I0929 11:17:20.367011  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:20.368457  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:20.368739  370114 main.go:141] libmachine: (addons-965504) DBG | Closing plugin on server side
	I0929 11:17:20.368773  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:20.368779  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:20.368792  370114 addons.go:479] Verifying addon registry=true in "addons-965504"
	I0929 11:17:20.367012  370114 main.go:141] libmachine: (addons-965504) DBG | Closing plugin on server side
	I0929 11:17:20.369009  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:20.369021  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:20.369029  370114 addons.go:479] Verifying addon ingress=true in "addons-965504"
	I0929 11:17:20.370493  370114 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-965504 service yakd-dashboard -n yakd-dashboard
	
	I0929 11:17:20.370566  370114 out.go:179] * Verifying registry addon...
	I0929 11:17:20.371446  370114 out.go:179] * Verifying ingress addon...
	I0929 11:17:20.373306  370114 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 11:17:20.374272  370114 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0929 11:17:20.454448  370114 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 11:17:20.454473  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:20.454490  370114 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 11:17:20.454510  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
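
The kapi.go lines above (and the long run of them that follows) poll the cluster for pods matching a label selector until one leaves Pending. A sketch of that wait with client-go; the kubeconfig path, namespace, selector and timeout are taken from the log for illustration, and the polling shape is this sketch's, not kapi.go's:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitForPod(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err == nil {
                for _, p := range pods.Items {
                    if p.Status.Phase == corev1.PodRunning {
                        return nil
                    }
                    fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                }
            }
            time.Sleep(500 * time.Millisecond) // illustrative poll interval
        }
        return fmt.Errorf("pod %q not running within %s", selector, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // guest-side path, assumed
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitForPod(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 4*time.Minute); err != nil {
            panic(err)
        }
    }
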
	I0929 11:17:20.479067  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:20.479090  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:20.479476  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:20.479507  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:20.479536  370114 main.go:141] libmachine: (addons-965504) DBG | Closing plugin on server side
	I0929 11:17:20.567953  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:17:20.614656  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 11:17:20.897505  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:20.911122  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:21.362888  370114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.822316962s)
	I0929 11:17:21.362957  370114 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (4.132454634s)
	I0929 11:17:21.363009  370114 system_svc.go:56] duration metric: took 4.132590788s WaitForService to wait for kubelet
	I0929 11:17:21.363022  370114 kubeadm.go:578] duration metric: took 9.616302107s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:17:21.363042  370114 node_conditions.go:102] verifying NodePressure condition ...
	I0929 11:17:21.363040  370114 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.692359653s)
	I0929 11:17:21.362961  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:21.363096  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:21.363420  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:21.363435  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:21.363445  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:21.363452  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:21.363677  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:21.363693  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:21.363719  370114 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-965504"
	I0929 11:17:21.364418  370114 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 11:17:21.365159  370114 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 11:17:21.366543  370114 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 11:17:21.367296  370114 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0929 11:17:21.367654  370114 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 11:17:21.367673  370114 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 11:17:21.389024  370114 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 11:17:21.389063  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:21.390847  370114 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0929 11:17:21.390890  370114 node_conditions.go:123] node cpu capacity is 2
	I0929 11:17:21.390909  370114 node_conditions.go:105] duration metric: took 27.8609ms to run NodePressure ...
	I0929 11:17:21.390928  370114 start.go:241] waiting for startup goroutines ...
	I0929 11:17:21.411603  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:21.412692  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:21.521362  370114 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 11:17:21.521397  370114 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 11:17:21.624534  370114 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 11:17:21.624559  370114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0929 11:17:21.753069  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 11:17:21.886777  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:21.888015  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:21.888410  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:22.375060  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:22.381833  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:22.382687  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:22.875727  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:22.891040  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:22.892840  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:23.380674  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:23.390143  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:23.390142  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:23.887478  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:23.887518  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:23.887658  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:23.938853  370114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.370821812s)
	I0929 11:17:23.938896  370114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.324163283s)
	I0929 11:17:23.938939  370114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.185837494s)
	W0929 11:17:23.938921  370114 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:17:23.938959  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:23.938983  370114 retry.go:31] will retry after 481.518543ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:17:23.938994  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:23.938988  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:23.939079  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:23.939420  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:23.939435  370114 main.go:141] libmachine: (addons-965504) DBG | Closing plugin on server side
	I0929 11:17:23.939438  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:23.939453  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:23.939462  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:23.939450  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:23.939481  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:23.939490  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:17:23.939499  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:17:23.939698  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:23.939715  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:23.939845  370114 main.go:141] libmachine: (addons-965504) DBG | Closing plugin on server side
	I0929 11:17:23.939856  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:17:23.939868  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:17:23.941075  370114 addons.go:479] Verifying addon gcp-auth=true in "addons-965504"
	I0929 11:17:23.942602  370114 out.go:179] * Verifying gcp-auth addon...
	I0929 11:17:23.944459  370114 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 11:17:23.978162  370114 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 11:17:23.978188  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:24.374585  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:24.378835  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:24.382288  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:24.421373  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:17:24.452280  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:24.873927  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:24.879068  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:24.885433  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:24.975357  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:25.385140  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:25.386377  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:25.388421  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:25.451596  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:25.689507  370114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.268085297s)
	W0929 11:17:25.689577  370114 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:17:25.689609  370114 retry.go:31] will retry after 301.66389ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:17:25.872992  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:25.877343  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:25.878847  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:25.948470  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:25.991850  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:17:26.371826  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:26.378697  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:26.381371  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:26.449479  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 11:17:26.703161  370114 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:17:26.703208  370114 retry.go:31] will retry after 847.406044ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:17:26.877039  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:26.879151  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:26.879884  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:26.976993  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:27.372495  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:27.377610  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:27.377696  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:27.448302  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:27.551530  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:17:27.875015  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:27.880619  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:27.881510  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:27.975414  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 11:17:28.259235  370114 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:17:28.259283  370114 retry.go:31] will retry after 1.163174636s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:17:28.372313  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:28.376828  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:28.378805  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:28.452810  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:28.876663  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:28.880585  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:28.883771  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:28.951454  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:29.373841  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:29.377796  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:29.379003  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:29.423136  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:17:29.452579  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:29.880013  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:29.886461  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:29.889521  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:29.950200  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:30.373292  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:30.379918  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:30.379967  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:30.447498  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:30.648945  370114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.225771185s)
	W0929 11:17:30.649017  370114 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:17:30.649046  370114 retry.go:31] will retry after 1.201517467s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:17:30.880343  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:30.883456  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:30.884686  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:30.949174  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:31.373049  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:31.380527  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:31.382397  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:31.451168  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:31.850874  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:17:31.875753  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:31.881881  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:31.887697  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:31.958335  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:32.371384  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:32.383005  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:32.383220  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:32.453290  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:32.883295  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:32.888026  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:32.890962  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:32.952909  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:33.097220  370114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.246288246s)
	W0929 11:17:33.097268  370114 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:17:33.097296  370114 retry.go:31] will retry after 2.28758323s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:17:33.381583  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:33.384647  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:33.390212  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:33.456366  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:33.875356  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:33.880030  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:33.881389  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:33.949900  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:34.374219  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:34.377826  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:34.379675  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:34.448502  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:34.874473  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:34.886773  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:34.889671  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:34.953116  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:35.373600  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:35.377820  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:35.379894  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:35.385891  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:17:35.449825  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:35.873502  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:35.882649  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:35.883614  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:35.948541  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:36.371853  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:36.379624  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:36.380026  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:36.448423  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:36.650142  370114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.264200988s)
	W0929 11:17:36.650200  370114 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:17:36.650228  370114 retry.go:31] will retry after 4.769575953s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:17:36.880106  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:36.881426  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:36.884617  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:36.948722  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:37.394188  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:37.394872  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:37.396100  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:37.449050  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:37.899996  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:37.900313  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:37.900666  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:37.949499  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:38.373187  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:38.379184  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:38.381273  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:38.448885  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:38.873879  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:38.880774  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:38.881929  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:38.948962  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:39.483593  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:39.483824  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:39.483845  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:39.483915  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:39.883652  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:39.883762  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:39.883886  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:39.948654  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:40.380618  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:40.390621  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:40.390964  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:40.455079  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:40.884037  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:40.884899  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:40.886406  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:40.949373  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:41.371750  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:41.376280  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:41.377867  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:41.420985  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:17:41.448808  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:41.872960  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:41.880796  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:41.882087  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:41.951692  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 11:17:42.098509  370114 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:17:42.098553  370114 retry.go:31] will retry after 8.757291485s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:17:42.372070  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:42.377591  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:42.377841  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:42.447912  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:42.872446  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:42.877451  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:42.878700  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:42.949394  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:43.371592  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:43.378784  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:43.378938  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:43.447859  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:43.875714  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:43.881905  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:43.882152  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:43.947882  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:44.377121  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:44.379661  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:44.380759  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:44.448434  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:44.878682  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:44.888075  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:44.890815  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:44.948397  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:45.373326  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:45.375699  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:45.380151  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:45.450284  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:45.871717  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:45.876593  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:45.880144  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:45.948815  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:46.384849  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:46.385388  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:46.390570  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:46.448889  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:46.871368  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:46.879127  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:46.881702  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:46.951939  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:47.372261  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:47.377896  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:47.378351  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:47.448808  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:47.878603  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:47.882727  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:47.889628  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:47.951763  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:48.373549  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:48.378697  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:48.381399  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:48.449193  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:48.871643  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:48.878800  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:48.879515  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:48.949873  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:49.372238  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:49.378231  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:49.380196  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:49.449206  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:49.871828  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:49.877860  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:49.878504  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:49.949137  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:50.372100  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:50.377897  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:50.378399  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:50.448175  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:50.856738  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:17:50.874136  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:50.880854  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:50.881189  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:50.948773  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:51.377509  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:51.379450  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:51.381746  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:51.450353  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:51.871838  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:51.886410  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:51.887408  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:51.951610  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:52.005416  370114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.148626992s)
	W0929 11:17:52.005500  370114 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:17:52.005530  370114 retry.go:31] will retry after 7.412308301s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:17:52.377123  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:52.378398  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:52.378457  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:52.452338  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:52.878402  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:52.883030  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:52.884284  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:52.951208  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:53.374415  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:53.377786  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:53.379195  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:53.448045  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:53.881071  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:53.887313  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:53.887431  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:53.950227  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:54.374276  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:54.378268  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:54.380724  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:54.448567  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:54.885409  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:54.893410  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:54.894483  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:54.950164  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:55.375509  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:55.387455  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:55.472713  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:55.472808  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:55.873772  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:55.881590  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:55.885564  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:55.949729  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:56.373119  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:56.383892  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:56.383906  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:56.449070  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:56.878349  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:56.879796  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:56.883637  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:56.950014  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:57.374568  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:57.380403  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:57.382203  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:57.795828  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:57.881354  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:57.881447  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:57.883691  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:57.949638  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:58.372263  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:58.376871  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:58.381871  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:58.448107  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:58.874546  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:58.882169  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:58.885027  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:58.949405  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:59.371414  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:59.380045  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:59.380172  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:59.418314  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:17:59.474328  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:17:59.885880  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:17:59.889833  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:17:59.890800  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:17:59.948720  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 11:18:00.328264  370114 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:18:00.328306  370114 retry.go:31] will retry after 9.485734408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:18:00.371347  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:00.377108  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:00.377294  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:00.449920  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:00.873155  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:00.878280  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:00.881265  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:00.948665  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:01.371465  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:01.375968  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:01.377861  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:01.448843  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:01.888295  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:01.888770  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:01.888953  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:01.949834  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:02.372801  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:02.379160  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:02.380287  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:02.449669  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:02.879141  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:02.883249  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:02.884044  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:02.951114  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:03.373566  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:03.376276  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:03.377962  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:03.448576  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:03.872404  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:03.879057  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:03.879557  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:03.947726  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:04.427333  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:04.428673  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:04.429099  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:04.450127  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:04.885273  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:04.885352  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:04.885729  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:04.950204  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:05.377007  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:05.377509  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:05.381605  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:05.449330  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:05.884798  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:05.884875  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:05.887163  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:05.950047  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:06.372636  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:06.378793  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:06.381228  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:06.447918  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:06.879376  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:06.879682  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:06.883365  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:06.951900  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:07.609080  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:07.609235  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:07.609386  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:07.609927  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:07.880002  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:07.885349  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:07.887590  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:07.949512  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:08.406237  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:08.406235  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:08.406962  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:08.502626  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:08.872101  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:08.877502  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:08.880056  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:08.948418  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:09.372862  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:09.378219  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:09.379106  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:09.448754  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:09.814222  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:18:09.878527  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:09.882090  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:09.885961  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:09.951513  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:10.374423  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:10.381753  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:10.382774  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:10.449141  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 11:18:10.677840  370114 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:18:10.677890  370114 retry.go:31] will retry after 13.403325758s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:18:10.873449  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:10.877533  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:10.885527  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:10.949520  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:11.371847  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:11.380659  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:11.382485  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:11.450863  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:11.880277  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:11.880398  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:11.880581  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:11.949316  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:12.373099  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:12.376522  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:12.378542  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:12.448241  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:12.872323  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:12.877672  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:12.879643  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:12.947764  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:13.371271  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:13.377614  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:13.377996  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:18:13.448338  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:13.871646  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:13.876908  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:13.879198  370114 kapi.go:107] duration metric: took 53.505891563s to wait for kubernetes.io/minikube-addons=registry ...
	I0929 11:18:13.948483  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:14.371097  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:14.379565  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:14.447791  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:14.878378  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:14.887521  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:14.948710  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:15.373351  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:15.379345  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:15.449669  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:15.875774  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:15.879953  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:15.948552  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:16.373447  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:16.378791  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:16.448821  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:16.880821  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:16.884678  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:16.950382  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:17.374295  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:17.379718  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:17.450237  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:18.523392  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:18.541467  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:18.541540  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:18.541748  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:18.546203  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:18.548284  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:18.874139  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:18.879312  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:18.948419  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:19.371506  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:19.377707  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:19.447892  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:19.873047  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:19.879304  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:19.948239  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:20.370938  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:20.377466  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:20.448527  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:20.876760  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:20.881659  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:20.949402  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:21.374341  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:21.379276  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:21.474340  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:21.878751  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:21.878828  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:21.948246  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:22.376239  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:22.383602  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:22.449922  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:22.895228  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:22.895449  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:22.983066  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:23.371278  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:23.378245  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:23.452630  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:23.872333  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:23.880400  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:23.949759  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:24.081939  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:18:24.373815  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:24.380720  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:24.455281  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:24.887000  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:24.887082  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:24.951532  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:25.373667  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:25.382438  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:25.449461  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:25.464491  370114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.382505449s)
	W0929 11:18:25.464559  370114 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:18:25.464590  370114 retry.go:31] will retry after 41.241874612s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:18:25.876192  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:25.878735  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:25.947947  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:26.371463  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:26.377986  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:26.449796  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:26.883391  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:26.883468  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:26.948241  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:27.371776  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:27.378626  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:27.473305  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:27.872742  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:27.877192  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:27.950530  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:28.374221  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:28.378268  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:28.453379  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:28.872892  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:28.879359  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:28.972684  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:29.373318  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:29.383086  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:29.451081  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:29.877864  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:29.883461  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:29.948532  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:30.381215  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:30.381987  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:30.477427  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:30.877592  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:30.879927  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:30.949649  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:31.372379  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:31.382014  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:31.449389  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:31.879965  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:31.886410  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:31.949472  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:32.377168  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:32.380840  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:32.448922  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:32.898091  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:32.902280  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:32.948734  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:33.372256  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:33.381434  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:33.450818  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:33.876960  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:33.883415  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:33.956527  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:34.374298  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:34.378958  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:34.474042  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:34.870653  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:34.882525  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:34.951913  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:35.380463  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:35.382651  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:35.765996  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:35.872363  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:35.881128  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:35.948948  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:36.372209  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:36.379192  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:36.449208  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:36.873924  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:36.882312  370114 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:18:36.947861  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:37.373469  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:37.380575  370114 kapi.go:107] duration metric: took 1m17.006303321s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0929 11:18:37.448793  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:37.882295  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:37.953114  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:38.376440  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:38.476868  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:38.877887  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:38.947961  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:39.564775  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:39.661162  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:39.870520  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:39.948827  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:40.373730  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:40.475901  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:40.951559  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:40.953182  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:41.372178  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:41.448171  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:41.874269  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:41.950190  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:42.374454  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:42.450968  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:42.873420  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:42.948862  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:43.374878  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:43.449499  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:43.882517  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:43.949763  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:44.371753  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:18:44.447526  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:44.875283  370114 kapi.go:107] duration metric: took 1m23.507976611s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
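The runs of kapi.go:96 lines above, each ended by a kapi.go:107 duration metric, come from a poll loop: roughly twice a second, list the pods matching an addon's label selector and keep waiting until every one reports Ready. A minimal client-go sketch of that loop, assuming a kubeconfig at the path the log uses and the csi-hostpath-driver selector; an illustration, not minikube's actual kapi.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForLabel polls until every pod matching selector is Ready.
func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	tick := time.NewTicker(500 * time.Millisecond) // ~2 polls/s, like the timestamps above
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				continue // no matching pods yet; keep polling
			}
			ready := true
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					ready = false // still "Pending" from the waiter's point of view
					break
				}
			}
			if ready {
				return nil
			}
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path taken from the log
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	start := time.Now()
	if err := waitForLabel(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver"); err != nil {
		panic(err)
	}
	fmt.Printf("duration metric: took %s\n", time.Since(start))
}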
	I0929 11:18:44.948158  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:45.447960  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:45.949344  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:46.449015  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:46.947864  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:47.448778  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:47.949209  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:48.449607  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:48.949320  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:49.448245  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:49.947799  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:50.449341  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:50.947989  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:51.448320  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:51.949221  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:52.448023  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:52.948035  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:53.448272  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:53.948062  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:54.447815  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:54.951746  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:55.448782  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:55.948737  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:56.449119  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:56.949365  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:57.448709  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:57.948111  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:58.447336  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:58.947565  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:59.450327  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:18:59.947917  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:00.448853  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:00.948439  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:01.448542  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:01.948747  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:02.449323  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:02.947468  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:03.448966  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:03.948749  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:04.448805  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:04.948997  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:05.448948  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:05.948727  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:06.448735  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:06.707108  370114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:19:06.949200  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:07.449835  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 11:19:07.454515  370114 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:19:07.454599  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:19:07.454624  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:19:07.454929  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:19:07.454948  370114 main.go:141] libmachine: (addons-965504) DBG | Closing plugin on server side
	I0929 11:19:07.454952  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:19:07.454992  370114 main.go:141] libmachine: Making call to close driver server
	I0929 11:19:07.455004  370114 main.go:141] libmachine: (addons-965504) Calling .Close
	I0929 11:19:07.455248  370114 main.go:141] libmachine: Successfully made call to close driver server
	I0929 11:19:07.455278  370114 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 11:19:07.455252  370114 main.go:141] libmachine: (addons-965504) DBG | Closing plugin on server side
	W0929 11:19:07.455399  370114 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
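The root cause, repeated verbatim on every retry above, is that /etc/kubernetes/addons/ig-crd.yaml lacks the top-level apiVersion and kind fields that kubectl's client-side validation requires of every manifest, so the addon enable ultimately gives up. A toy pre-flight check for those two fields; a naive line scan under that assumption, not a real YAML parser, and it ignores multi-document files:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// hasTopLevelField reports whether a manifest declares a top-level key such
// as "apiVersion" or "kind". Top-level keys start in column 0 of the file.
func hasTopLevelField(path, field string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, field+":") &&
			strings.TrimSpace(strings.TrimPrefix(line, field+":")) != "" {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: precheck <manifest.yaml>")
		os.Exit(2)
	}
	for _, field := range []string{"apiVersion", "kind"} {
		ok, err := hasTopLevelField(os.Args[1], field)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if !ok {
			// Mirrors kubectl's "[apiVersion not set, kind not set]" wording.
			fmt.Printf("%s: %s not set\n", os.Args[1], field)
		}
	}
}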
	I0929 11:19:07.948068  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:08.448870  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:08.947803  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:09.448586  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:09.949385  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:10.448663  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:10.948967  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:11.448962  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:11.947635  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:12.447951  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:12.947624  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:13.449470  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:13.948210  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:14.447561  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:14.949403  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:15.448485  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:15.948965  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:16.449560  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:16.948871  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:17.447984  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:17.948515  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:18.448783  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:18.948546  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:19.448126  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:19.948192  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:20.447881  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:20.947958  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:21.449694  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:21.950053  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:22.448430  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:22.948650  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:23.449703  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:23.949476  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:24.448515  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:24.949111  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:25.449282  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:25.948878  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:26.448614  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:26.948641  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:27.448378  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:27.948600  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:28.448512  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:28.948389  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:29.447898  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:29.947615  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:30.448528  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:30.949165  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:31.449197  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:31.948424  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:32.448047  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:32.948193  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:33.447520  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:33.949186  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:34.448754  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:34.948306  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:35.448814  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:35.949582  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:36.448654  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:36.948189  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:37.448649  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:37.949384  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:38.448590  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:38.948879  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:39.448682  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:39.949582  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:40.448346  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:40.948791  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:41.450772  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:41.955613  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:42.449569  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:42.949012  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:43.448141  370114 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:19:43.948837  370114 kapi.go:107] duration metric: took 2m20.004378363s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 11:19:43.950400  370114 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-965504 cluster.
	I0929 11:19:43.951660  370114 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 11:19:43.952736  370114 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0929 11:19:43.953830  370114 out.go:179] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, registry-creds, cloud-spanner, storage-provisioner, ingress-dns, storage-provisioner-rancher, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0929 11:19:43.954995  370114 addons.go:514] duration metric: took 2m32.208228026s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin registry-creds cloud-spanner storage-provisioner ingress-dns storage-provisioner-rancher metrics-server yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0929 11:19:43.955044  370114 start.go:246] waiting for cluster config update ...
	I0929 11:19:43.955064  370114 start.go:255] writing updated cluster config ...
	I0929 11:19:43.955360  370114 ssh_runner.go:195] Run: rm -f paused
	I0929 11:19:43.961554  370114 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:19:43.965686  370114 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-285tb" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:19:43.970636  370114 pod_ready.go:94] pod "coredns-66bc5c9577-285tb" is "Ready"
	I0929 11:19:43.970660  370114 pod_ready.go:86] duration metric: took 4.950243ms for pod "coredns-66bc5c9577-285tb" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:19:43.973182  370114 pod_ready.go:83] waiting for pod "etcd-addons-965504" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:19:43.979894  370114 pod_ready.go:94] pod "etcd-addons-965504" is "Ready"
	I0929 11:19:43.979919  370114 pod_ready.go:86] duration metric: took 6.718412ms for pod "etcd-addons-965504" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:19:43.982485  370114 pod_ready.go:83] waiting for pod "kube-apiserver-addons-965504" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:19:43.988526  370114 pod_ready.go:94] pod "kube-apiserver-addons-965504" is "Ready"
	I0929 11:19:43.988547  370114 pod_ready.go:86] duration metric: took 6.035875ms for pod "kube-apiserver-addons-965504" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:19:43.990937  370114 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-965504" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:19:44.367310  370114 pod_ready.go:94] pod "kube-controller-manager-addons-965504" is "Ready"
	I0929 11:19:44.367338  370114 pod_ready.go:86] duration metric: took 376.383082ms for pod "kube-controller-manager-addons-965504" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:19:44.565601  370114 pod_ready.go:83] waiting for pod "kube-proxy-dhkpx" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:19:44.966443  370114 pod_ready.go:94] pod "kube-proxy-dhkpx" is "Ready"
	I0929 11:19:44.966471  370114 pod_ready.go:86] duration metric: took 400.846175ms for pod "kube-proxy-dhkpx" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:19:45.165656  370114 pod_ready.go:83] waiting for pod "kube-scheduler-addons-965504" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:19:45.566454  370114 pod_ready.go:94] pod "kube-scheduler-addons-965504" is "Ready"
	I0929 11:19:45.566483  370114 pod_ready.go:86] duration metric: took 400.802027ms for pod "kube-scheduler-addons-965504" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:19:45.566497  370114 pod_ready.go:40] duration metric: took 1.604907498s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:19:45.612661  370114 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 11:19:45.614185  370114 out.go:179] * Done! kubectl is now configured to use "addons-965504" cluster and "default" namespace by default
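The gcp-auth notes above mean that from this point on, every pod created in this cluster gets GCP credentials mounted unless it carries a label with the gcp-auth-skip-secret key. A sketch of opting one pod out, built with client-go types; the pod name, image, and label value are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-creds",
			// The key is what matters, per the message above; the
			// value "true" is just a convention here.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}
	b, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}

Applying the printed manifest should yield a pod the gcp-auth webhook leaves alone, per the message above.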
	
	
	==> CRI-O <==
	Sep 29 11:22:53 addons-965504 crio[816]: time="2025-09-29 11:22:53.892832158Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6236a1a-066d-4811-88b1-69ea08b01e03 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:22:53 addons-965504 crio[816]: time="2025-09-29 11:22:53.893445828Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7db5fa5509ceaf2ad01a36effe85f7d0ef3c1bc4403a386e8d43d02f3d70d4dc,PodSandboxId:dc99bde55006be2b4b0f830fa10ae83fe1fb14730b97ca21424282a683865cf8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1759144831102179330,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c1df5c9-5d1d-4ca1-8e0c-f071fa132701,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c5c1ae64e18f9bf9ba3d196622188e6034a3eabf4dbbf398428d00ce9981f93,PodSandboxId:809d3b7734cb7abc27842decdff6f87bfd70d2265d4197d7e4afc0dcb4c5a16e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759144789812652581,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4801144-474a-40cc-9c33-ddafb69eddc6,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18e0d1e0e19335de15b574ac86a346d5455f80225b8eb1db77267a01cfd3d79,PodSandboxId:60b4f421aa0d51dcacd4b0a62990361c5516ef7936caf4859baa80c8bf69d10e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759144717200670843,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-6pnrv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 78e710f9-2e82-4c6f-964d-678b68382cba,},Annotations:map[string]string{io.kubernetes.container.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d99ef039dac39f971b53b57b2920bee0ef2dc5bdcce8c1b698551b610ecbf120,PodSandboxId:a016d6ac2cbf052a0d74be8ff3f0c980b1e6c1cd173ef9b9405b93d3548e3692,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759144702999402774,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-j7szj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 65199dac-07ab-4593-9940-382a7e7f269a,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb97f5e2b75bbba42614552184fafd2df8012db2ca4b253c996de5fc5222fab,PodSandboxId:7b1210579eebffaaa6d8fed5ea662ffffca3a3060db1080dc8a9a28f1657fbfb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759144702866828650,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8js7x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 20e97369-7b0e-48c1-be26-319804621cdb,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e9c94169ef6b410a52a0f7664c49f3c6ebcf3141a3e10b4ee1feda14c6f48f4,PodSandboxId:33bf84dabeb8163787de677cdbf9e2f8cf84a277f50e80c1e205c19d018f60d0,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759144688245180912,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-z5qw5,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: df216355-05ab-43a4-8442-3cd9730f5c17,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3978b9603d78200518c6e7e1bbc414220ad80f010b44921652dbcdea25d3823d,PodSandboxId:c06748929a6d22a47e8f33300842cd6b11e3152f3cdca1e96f8ff41858082d61,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759144679330042609,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410e0153-dff2-4ae4-8f05-c77978d36332,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3caa0d48f1b2d83facd2e6eea82d083363506db84e58fd397780a082e6488677,PodSandboxId:12845a5a3e725e5d5e850e8cee7abf3a2c53ffe30d17d5cc68885cd02e1788c5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759144645109452267,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-t8rkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214ad880-85fe-4b9e-9ff8-356871df65cf,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74c5fbab44af81c4dea6e51856ccfdeaad44858b1c1744d2f22481155bbda9a5,PodSandboxId:f4dcd22
5d638446e50ceb0ccf741b6a0504ae62d6ebc161ede5558d498a14e52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759144639632374621,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5abf2f6-fbaa-408e-8e41-ecf4a4e30109,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbcc403c4338ff51f0879576616b0c93da2117f7969918d239a76945b99120cc,PodSandboxId:54881e5ddcf910e46f0
880e68530256ca183f5bf2650df83cfe7dd6b258d43ae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759144632845837392,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-285tb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 890f5f5e-f938-4859-911a-76df4c079c7d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a456f80188b9d9b7a27e5d6b081ce7917e59d3329af8828f220cff7d56cebf44,PodSandboxId:ff210ffabfa7d461ce50a49794fb00cdf653d09f094381c6240dc1559d3b3b25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759144631983430744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhkpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eed0693f-15ae-42ed-9bfa-8d992b2cd1ad,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.c
ontainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8eeaf40c58444600e020b4594b4efcc397302c41ce9e0f8190c48656ed728a3,PodSandboxId:16ad32a0a7ec693ad837a42a8f77cc611709cb3acb291181866283dec110590b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759144620357386078,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-965504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24aec056b5132d45b8d4224be2541560,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:354bbdf20e78d02d960b753a77b22181f5cb1f9beafe5bda721b1ca79b430244,PodSandboxId:ea8fce42df9367e0bc5623b4d06f99fd35751d684525c018a27020130ee596fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759144620328476038,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-965504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed6dad664cebb9727e54227fcbbfc9a3,},
Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fe18edfe1adedb4b2aa5743d4b3c1c687272f06849f803da298f3711de67371,PodSandboxId:d51baff335b082f6e9f863b3c2e4c18c9fbbfb325b8fb2d40d6972522b095213,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759144620132009955,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-
965504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa070a0b54dff8a41f1a153d4e5fd880,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf9e5ef64cbfc127e110f63224c339f0faeb53e3ea626bef5666375b8adfdee,PodSandboxId:8f1e2d001262f928005144a873f089af9e897b4baab66bdad107d03c7bfe0189,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759144619991729289,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-965504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84885c20c54ae697f1810819f1ed4653,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6236a1a-066d-4811-88b1-69ea08b01e03 name=/runtime.v1.RuntimeService/ListContainers
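
Editorial aside (not part of the captured log): the Version, ImageFsInfo, and ListContainers request/response pairs repeated throughout this crio debug log are routine CRI polling of the runtime. For reference, a minimal Go sketch of the same ListContainers call against the CRI socket follows; the socket path /var/run/crio/crio.sock and the use of the k8s.io/cri-api v1 client are assumptions for a stock cri-o install, not details taken from this report.

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Assumed default cri-o socket path; adjust for the environment.
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatalf("dial cri-o: %v", err)
    	}
    	defer conn.Close()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// An empty filter asks for every container; crio logs this as
    	// "No filters were applied, returning full container list".
    	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		log.Fatalf("ListContainers: %v", err)
    	}
    	for _, c := range resp.Containers {
    		fmt.Printf("%.12s %s %s\n", c.Id, c.Metadata.Name, c.State)
    	}
    }

Where available on the node, crictl ps -a drives this same RuntimeService/ListContainers endpoint.
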
	Sep 29 11:22:53 addons-965504 crio[816]: time="2025-09-29 11:22:53.938758318Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e956caeb-aca7-432c-ad71-5ad2ca6a3c38 name=/runtime.v1.RuntimeService/Version
	Sep 29 11:22:53 addons-965504 crio[816]: time="2025-09-29 11:22:53.938845303Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e956caeb-aca7-432c-ad71-5ad2ca6a3c38 name=/runtime.v1.RuntimeService/Version
	Sep 29 11:22:53 addons-965504 crio[816]: time="2025-09-29 11:22:53.940515719Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=11be395d-44e3-48a5-a5fc-35c78f2e7f56 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:22:53 addons-965504 crio[816]: time="2025-09-29 11:22:53.942547688Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759144973942481341,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596878,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=11be395d-44e3-48a5-a5fc-35c78f2e7f56 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:22:53 addons-965504 crio[816]: time="2025-09-29 11:22:53.943399647Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=991ad5c6-5e39-4543-bc9e-5b85cee4348f name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:22:53 addons-965504 crio[816]: time="2025-09-29 11:22:53.943699225Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=991ad5c6-5e39-4543-bc9e-5b85cee4348f name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:22:53 addons-965504 crio[816]: time="2025-09-29 11:22:53.944413786Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7db5fa5509ceaf2ad01a36effe85f7d0ef3c1bc4403a386e8d43d02f3d70d4dc,PodSandboxId:dc99bde55006be2b4b0f830fa10ae83fe1fb14730b97ca21424282a683865cf8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1759144831102179330,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c1df5c9-5d1d-4ca1-8e0c-f071fa132701,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c5c1ae64e18f9bf9ba3d196622188e6034a3eabf4dbbf398428d00ce9981f93,PodSandboxId:809d3b7734cb7abc27842decdff6f87bfd70d2265d4197d7e4afc0dcb4c5a16e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759144789812652581,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4801144-474a-40cc-9c33-ddafb69eddc6,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18e0d1e0e19335de15b574ac86a346d5455f80225b8eb1db77267a01cfd3d79,PodSandboxId:60b4f421aa0d51dcacd4b0a62990361c5516ef7936caf4859baa80c8bf69d10e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759144717200670843,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-6pnrv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 78e710f9-2e82-4c6f-964d-678b68382cba,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d99ef039dac39f971b53b57b2920bee0ef2dc5bdcce8c1b698551b610ecbf120,PodSandboxId:a016d6ac2cbf052a0d74be8ff3f0c980b1e6c1cd173ef9b9405b93d3548e3692,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da673
4db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759144702999402774,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-j7szj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 65199dac-07ab-4593-9940-382a7e7f269a,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb97f5e2b75bbba42614552184fafd2df8012db2ca4b253c996de5fc5222fab,PodSandboxId:7b1210579eebffaaa6d8fed5ea662ffffca3a3060db1080dc8a9a28f1657fbfb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759144702866828650,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8js7x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 20e97369-7b0e-48c1-be26-319804621cdb,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e9c94169ef6b410a52a0f7664c49f3c6ebcf3141a3e10b4ee1feda14c6f48f4,PodSandboxId:33bf84dabeb8163787de677cdbf9e2f8cf84a277f50e80c1e205c19d018f60d0,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759144688245180912,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-z5qw5,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: df216355-05ab-43a4-8442-3cd9730f5c17,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3978b9603d78200518c6e7e1bbc414220ad80f010b44921652dbcdea25d3823d,PodSandboxId:c06748929a6d22a47e8f33300842cd6b11e3152f3cdca1e96f8ff41858082d61,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759144679330042609,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410e0153-dff2-4ae4-8f05-c77978d36332,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3caa0d48f1b2d83facd2e6eea82d083363506db84e58fd397780a082e6488677,PodSandboxId:12845a5a3e725e5d5e850e8cee7abf3a2c53ffe30d17d5c
c68885cd02e1788c5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759144645109452267,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-t8rkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214ad880-85fe-4b9e-9ff8-356871df65cf,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74c5fbab44af81c4dea6e51856ccfdeaad44858b1c1744d2f22481155bbda9a5,PodSandboxId:f4dcd22
5d638446e50ceb0ccf741b6a0504ae62d6ebc161ede5558d498a14e52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759144639632374621,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5abf2f6-fbaa-408e-8e41-ecf4a4e30109,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbcc403c4338ff51f0879576616b0c93da2117f7969918d239a76945b99120cc,PodSandboxId:54881e5ddcf910e46f0
880e68530256ca183f5bf2650df83cfe7dd6b258d43ae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759144632845837392,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-285tb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 890f5f5e-f938-4859-911a-76df4c079c7d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a456f80188b9d9b7a27e5d6b081ce7917e59d3329af8828f220cff7d56cebf44,PodSandboxId:ff210ffabfa7d461ce50a49794fb00cdf653d09f094381c6240dc1559d3b3b25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759144631983430744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhkpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eed0693f-15ae-42ed-9bfa-8d992b2cd1ad,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.c
ontainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8eeaf40c58444600e020b4594b4efcc397302c41ce9e0f8190c48656ed728a3,PodSandboxId:16ad32a0a7ec693ad837a42a8f77cc611709cb3acb291181866283dec110590b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759144620357386078,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-965504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24aec056b5132d45b8d4224be2541560,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:354bbdf20e78d02d960b753a77b22181f5cb1f9beafe5bda721b1ca79b430244,PodSandboxId:ea8fce42df9367e0bc5623b4d06f99fd35751d684525c018a27020130ee596fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759144620328476038,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-965504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed6dad664cebb9727e54227fcbbfc9a3,},
Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fe18edfe1adedb4b2aa5743d4b3c1c687272f06849f803da298f3711de67371,PodSandboxId:d51baff335b082f6e9f863b3c2e4c18c9fbbfb325b8fb2d40d6972522b095213,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759144620132009955,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-
965504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa070a0b54dff8a41f1a153d4e5fd880,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf9e5ef64cbfc127e110f63224c339f0faeb53e3ea626bef5666375b8adfdee,PodSandboxId:8f1e2d001262f928005144a873f089af9e897b4baab66bdad107d03c7bfe0189,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759144619991729289,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-965504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84885c20c54ae697f1810819f1ed4653,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=991ad5c6-5e39-4543-bc9e-5b85cee4348f name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:22:53 addons-965504 crio[816]: time="2025-09-29 11:22:53.983970615Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2662f0de-9987-452f-9af3-4fea98554b9d name=/runtime.v1.RuntimeService/Version
	Sep 29 11:22:53 addons-965504 crio[816]: time="2025-09-29 11:22:53.984052862Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2662f0de-9987-452f-9af3-4fea98554b9d name=/runtime.v1.RuntimeService/Version
	Sep 29 11:22:53 addons-965504 crio[816]: time="2025-09-29 11:22:53.985447177Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c3d46d32-0130-476b-9a73-f1ac7a6ba91d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:22:53 addons-965504 crio[816]: time="2025-09-29 11:22:53.988184290Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759144973988107526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596878,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c3d46d32-0130-476b-9a73-f1ac7a6ba91d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:22:53 addons-965504 crio[816]: time="2025-09-29 11:22:53.989130681Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f81e10fd-cc99-4ddf-a33c-5bd236e3c620 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:22:53 addons-965504 crio[816]: time="2025-09-29 11:22:53.989228271Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f81e10fd-cc99-4ddf-a33c-5bd236e3c620 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:22:53 addons-965504 crio[816]: time="2025-09-29 11:22:53.990081241Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7db5fa5509ceaf2ad01a36effe85f7d0ef3c1bc4403a386e8d43d02f3d70d4dc,PodSandboxId:dc99bde55006be2b4b0f830fa10ae83fe1fb14730b97ca21424282a683865cf8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1759144831102179330,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c1df5c9-5d1d-4ca1-8e0c-f071fa132701,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c5c1ae64e18f9bf9ba3d196622188e6034a3eabf4dbbf398428d00ce9981f93,PodSandboxId:809d3b7734cb7abc27842decdff6f87bfd70d2265d4197d7e4afc0dcb4c5a16e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759144789812652581,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4801144-474a-40cc-9c33-ddafb69eddc6,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18e0d1e0e19335de15b574ac86a346d5455f80225b8eb1db77267a01cfd3d79,PodSandboxId:60b4f421aa0d51dcacd4b0a62990361c5516ef7936caf4859baa80c8bf69d10e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759144717200670843,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-6pnrv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 78e710f9-2e82-4c6f-964d-678b68382cba,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d99ef039dac39f971b53b57b2920bee0ef2dc5bdcce8c1b698551b610ecbf120,PodSandboxId:a016d6ac2cbf052a0d74be8ff3f0c980b1e6c1cd173ef9b9405b93d3548e3692,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da673
4db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759144702999402774,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-j7szj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 65199dac-07ab-4593-9940-382a7e7f269a,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb97f5e2b75bbba42614552184fafd2df8012db2ca4b253c996de5fc5222fab,PodSandboxId:7b1210579eebffaaa6d8fed5ea662ffffca3a3060db1080dc8a9a28f1657fbfb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759144702866828650,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8js7x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 20e97369-7b0e-48c1-be26-319804621cdb,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e9c94169ef6b410a52a0f7664c49f3c6ebcf3141a3e10b4ee1feda14c6f48f4,PodSandboxId:33bf84dabeb8163787de677cdbf9e2f8cf84a277f50e80c1e205c19d018f60d0,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759144688245180912,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-z5qw5,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: df216355-05ab-43a4-8442-3cd9730f5c17,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3978b9603d78200518c6e7e1bbc414220ad80f010b44921652dbcdea25d3823d,PodSandboxId:c06748929a6d22a47e8f33300842cd6b11e3152f3cdca1e96f8ff41858082d61,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759144679330042609,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410e0153-dff2-4ae4-8f05-c77978d36332,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3caa0d48f1b2d83facd2e6eea82d083363506db84e58fd397780a082e6488677,PodSandboxId:12845a5a3e725e5d5e850e8cee7abf3a2c53ffe30d17d5c
c68885cd02e1788c5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759144645109452267,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-t8rkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214ad880-85fe-4b9e-9ff8-356871df65cf,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74c5fbab44af81c4dea6e51856ccfdeaad44858b1c1744d2f22481155bbda9a5,PodSandboxId:f4dcd22
5d638446e50ceb0ccf741b6a0504ae62d6ebc161ede5558d498a14e52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759144639632374621,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5abf2f6-fbaa-408e-8e41-ecf4a4e30109,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbcc403c4338ff51f0879576616b0c93da2117f7969918d239a76945b99120cc,PodSandboxId:54881e5ddcf910e46f0
880e68530256ca183f5bf2650df83cfe7dd6b258d43ae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759144632845837392,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-285tb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 890f5f5e-f938-4859-911a-76df4c079c7d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a456f80188b9d9b7a27e5d6b081ce7917e59d3329af8828f220cff7d56cebf44,PodSandboxId:ff210ffabfa7d461ce50a49794fb00cdf653d09f094381c6240dc1559d3b3b25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759144631983430744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhkpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eed0693f-15ae-42ed-9bfa-8d992b2cd1ad,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.c
ontainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8eeaf40c58444600e020b4594b4efcc397302c41ce9e0f8190c48656ed728a3,PodSandboxId:16ad32a0a7ec693ad837a42a8f77cc611709cb3acb291181866283dec110590b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759144620357386078,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-965504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24aec056b5132d45b8d4224be2541560,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:354bbdf20e78d02d960b753a77b22181f5cb1f9beafe5bda721b1ca79b430244,PodSandboxId:ea8fce42df9367e0bc5623b4d06f99fd35751d684525c018a27020130ee596fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759144620328476038,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-965504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed6dad664cebb9727e54227fcbbfc9a3,},
Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fe18edfe1adedb4b2aa5743d4b3c1c687272f06849f803da298f3711de67371,PodSandboxId:d51baff335b082f6e9f863b3c2e4c18c9fbbfb325b8fb2d40d6972522b095213,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759144620132009955,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-
965504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa070a0b54dff8a41f1a153d4e5fd880,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf9e5ef64cbfc127e110f63224c339f0faeb53e3ea626bef5666375b8adfdee,PodSandboxId:8f1e2d001262f928005144a873f089af9e897b4baab66bdad107d03c7bfe0189,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759144619991729289,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-965504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84885c20c54ae697f1810819f1ed4653,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f81e10fd-cc99-4ddf-a33c-5bd236e3c620 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:22:53 addons-965504 crio[816]: time="2025-09-29 11:22:53.990922643Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Sep 29 11:22:53 addons-965504 crio[816]: time="2025-09-29 11:22:53.991347804Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
	Sep 29 11:22:54 addons-965504 crio[816]: time="2025-09-29 11:22:54.028398962Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=843f39eb-b14a-40a7-820c-9024f0cb6b27 name=/runtime.v1.RuntimeService/Version
	Sep 29 11:22:54 addons-965504 crio[816]: time="2025-09-29 11:22:54.028493714Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=843f39eb-b14a-40a7-820c-9024f0cb6b27 name=/runtime.v1.RuntimeService/Version
	Sep 29 11:22:54 addons-965504 crio[816]: time="2025-09-29 11:22:54.029659010Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fbd64442-ce75-45c6-855e-5a45fb64a5d5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:22:54 addons-965504 crio[816]: time="2025-09-29 11:22:54.031007843Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759144974030975171,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596878,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fbd64442-ce75-45c6-855e-5a45fb64a5d5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 11:22:54 addons-965504 crio[816]: time="2025-09-29 11:22:54.031566613Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dad5ec86-60ea-4275-8ef4-f637a28ebb53 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:22:54 addons-965504 crio[816]: time="2025-09-29 11:22:54.031926028Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dad5ec86-60ea-4275-8ef4-f637a28ebb53 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 11:22:54 addons-965504 crio[816]: time="2025-09-29 11:22:54.032566715Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7db5fa5509ceaf2ad01a36effe85f7d0ef3c1bc4403a386e8d43d02f3d70d4dc,PodSandboxId:dc99bde55006be2b4b0f830fa10ae83fe1fb14730b97ca21424282a683865cf8,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1759144831102179330,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c1df5c9-5d1d-4ca1-8e0c-f071fa132701,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c5c1ae64e18f9bf9ba3d196622188e6034a3eabf4dbbf398428d00ce9981f93,PodSandboxId:809d3b7734cb7abc27842decdff6f87bfd70d2265d4197d7e4afc0dcb4c5a16e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759144789812652581,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4801144-474a-40cc-9c33-ddafb69eddc6,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b18e0d1e0e19335de15b574ac86a346d5455f80225b8eb1db77267a01cfd3d79,PodSandboxId:60b4f421aa0d51dcacd4b0a62990361c5516ef7936caf4859baa80c8bf69d10e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759144717200670843,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-6pnrv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 78e710f9-2e82-4c6f-964d-678b68382cba,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d99ef039dac39f971b53b57b2920bee0ef2dc5bdcce8c1b698551b610ecbf120,PodSandboxId:a016d6ac2cbf052a0d74be8ff3f0c980b1e6c1cd173ef9b9405b93d3548e3692,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da673
4db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759144702999402774,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-j7szj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 65199dac-07ab-4593-9940-382a7e7f269a,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb97f5e2b75bbba42614552184fafd2df8012db2ca4b253c996de5fc5222fab,PodSandboxId:7b1210579eebffaaa6d8fed5ea662ffffca3a3060db1080dc8a9a28f1657fbfb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759144702866828650,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8js7x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 20e97369-7b0e-48c1-be26-319804621cdb,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e9c94169ef6b410a52a0f7664c49f3c6ebcf3141a3e10b4ee1feda14c6f48f4,PodSandboxId:33bf84dabeb8163787de677cdbf9e2f8cf84a277f50e80c1e205c19d018f60d0,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759144688245180912,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-z5qw5,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: df216355-05ab-43a4-8442-3cd9730f5c17,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3978b9603d78200518c6e7e1bbc414220ad80f010b44921652dbcdea25d3823d,PodSandboxId:c06748929a6d22a47e8f33300842cd6b11e3152f3cdca1e96f8ff41858082d61,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759144679330042609,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410e0153-dff2-4ae4-8f05-c77978d36332,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3caa0d48f1b2d83facd2e6eea82d083363506db84e58fd397780a082e6488677,PodSandboxId:12845a5a3e725e5d5e850e8cee7abf3a2c53ffe30d17d5c
c68885cd02e1788c5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759144645109452267,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-t8rkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214ad880-85fe-4b9e-9ff8-356871df65cf,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74c5fbab44af81c4dea6e51856ccfdeaad44858b1c1744d2f22481155bbda9a5,PodSandboxId:f4dcd22
5d638446e50ceb0ccf741b6a0504ae62d6ebc161ede5558d498a14e52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759144639632374621,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5abf2f6-fbaa-408e-8e41-ecf4a4e30109,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbcc403c4338ff51f0879576616b0c93da2117f7969918d239a76945b99120cc,PodSandboxId:54881e5ddcf910e46f0
880e68530256ca183f5bf2650df83cfe7dd6b258d43ae,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759144632845837392,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-285tb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 890f5f5e-f938-4859-911a-76df4c079c7d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a456f80188b9d9b7a27e5d6b081ce7917e59d3329af8828f220cff7d56cebf44,PodSandboxId:ff210ffabfa7d461ce50a49794fb00cdf653d09f094381c6240dc1559d3b3b25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1759144631983430744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhkpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eed0693f-15ae-42ed-9bfa-8d992b2cd1ad,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.c
ontainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8eeaf40c58444600e020b4594b4efcc397302c41ce9e0f8190c48656ed728a3,PodSandboxId:16ad32a0a7ec693ad837a42a8f77cc611709cb3acb291181866283dec110590b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759144620357386078,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-965504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24aec056b5132d45b8d4224be2541560,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:354bbdf20e78d02d960b753a77b22181f5cb1f9beafe5bda721b1ca79b430244,PodSandboxId:ea8fce42df9367e0bc5623b4d06f99fd35751d684525c018a27020130ee596fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759144620328476038,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-965504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed6dad664cebb9727e54227fcbbfc9a3,},
Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fe18edfe1adedb4b2aa5743d4b3c1c687272f06849f803da298f3711de67371,PodSandboxId:d51baff335b082f6e9f863b3c2e4c18c9fbbfb325b8fb2d40d6972522b095213,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759144620132009955,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-
965504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa070a0b54dff8a41f1a153d4e5fd880,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf9e5ef64cbfc127e110f63224c339f0faeb53e3ea626bef5666375b8adfdee,PodSandboxId:8f1e2d001262f928005144a873f089af9e897b4baab66bdad107d03c7bfe0189,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759144619991729289,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-965504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84885c20c54ae697f1810819f1ed4653,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dad5ec86-60ea-4275-8ef4-f637a28ebb53 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7db5fa5509cea       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago       Running             nginx                     0                   dc99bde55006b       nginx
	7c5c1ae64e18f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   809d3b7734cb7       busybox
	b18e0d1e0e193       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             4 minutes ago       Running             controller                0                   60b4f421aa0d5       ingress-nginx-controller-9cc49f96f-6pnrv
	d99ef039dac39       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago       Exited              patch                     0                   a016d6ac2cbf0       ingress-nginx-admission-patch-j7szj
	efb97f5e2b75b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago       Exited              create                    0                   7b1210579eebf       ingress-nginx-admission-create-8js7x
	4e9c94169ef6b       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            4 minutes ago       Running             gadget                    0                   33bf84dabeb81       gadget-z5qw5
	3978b9603d782       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   c06748929a6d2       kube-ingress-dns-minikube
	3caa0d48f1b2d       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   12845a5a3e725       amd-gpu-device-plugin-t8rkt
	74c5fbab44af8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   f4dcd225d6384       storage-provisioner
	fbcc403c4338f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago       Running             coredns                   0                   54881e5ddcf91       coredns-66bc5c9577-285tb
	a456f80188b9d       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                             5 minutes ago       Running             kube-proxy                0                   ff210ffabfa7d       kube-proxy-dhkpx
	c8eeaf40c5844       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   16ad32a0a7ec6       etcd-addons-965504
	354bbdf20e78d       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                             5 minutes ago       Running             kube-controller-manager   0                   ea8fce42df936       kube-controller-manager-addons-965504
	7fe18edfe1ade       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                             5 minutes ago       Running             kube-scheduler            0                   d51baff335b08       kube-scheduler-addons-965504
	acf9e5ef64cbf       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                             5 minutes ago       Running             kube-apiserver            0                   8f1e2d001262f       kube-apiserver-addons-965504
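
This table follows crictl's ps output format; assuming the cluster is still up, the same view can be reproduced directly on the node with something like:

	out/minikube-linux-amd64 -p addons-965504 ssh "sudo crictl ps -a"

The -a flag keeps the two Exited kube-webhook-certgen jobs (create/patch) visible alongside the running containers.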
	
	
	==> coredns [fbcc403c4338ff51f0879576616b0c93da2117f7969918d239a76945b99120cc] <==
	[INFO] 10.244.0.8:50413 - 28949 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.002714195s
	[INFO] 10.244.0.8:50413 - 49715 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000169694s
	[INFO] 10.244.0.8:50413 - 6531 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000184353s
	[INFO] 10.244.0.8:50413 - 12689 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000077293s
	[INFO] 10.244.0.8:50413 - 36950 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000137905s
	[INFO] 10.244.0.8:50413 - 20765 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000177421s
	[INFO] 10.244.0.8:50413 - 46060 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000124487s
	[INFO] 10.244.0.8:46352 - 23366 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000133038s
	[INFO] 10.244.0.8:46352 - 23058 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000192136s
	[INFO] 10.244.0.8:47415 - 24873 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000093945s
	[INFO] 10.244.0.8:47415 - 25145 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000339311s
	[INFO] 10.244.0.8:58734 - 36730 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000086477s
	[INFO] 10.244.0.8:58734 - 36488 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000074816s
	[INFO] 10.244.0.8:54238 - 41876 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00014846s
	[INFO] 10.244.0.8:54238 - 41645 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000077203s
	[INFO] 10.244.0.23:45240 - 46215 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000473514s
	[INFO] 10.244.0.23:36475 - 63788 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00044654s
	[INFO] 10.244.0.23:39837 - 7752 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000094406s
	[INFO] 10.244.0.23:57538 - 7740 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000373065s
	[INFO] 10.244.0.23:41240 - 39458 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128003s
	[INFO] 10.244.0.23:57162 - 51921 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000415083s
	[INFO] 10.244.0.23:39853 - 23926 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.003399582s
	[INFO] 10.244.0.23:48053 - 50894 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004394602s
	[INFO] 10.244.0.28:47172 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000352617s
	[INFO] 10.244.0.28:52431 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000166782s
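
The NXDOMAIN/NOERROR pairs above look like ordinary resolv.conf search-path expansion rather than a DNS fault: with ndots:5, a query for registry.kube-system.svc.cluster.local is first tried with each search domain appended (the .kube-system.svc.cluster.local, .svc.cluster.local and .cluster.local suffixes that fail with NXDOMAIN) before the absolute name answers NOERROR. The pod-side config can be confirmed via the busybox pod from this log, assuming it is still running:

	kubectl --context addons-965504 exec busybox -- cat /etc/resolv.conf

which should show the three cluster search domains plus options ndots:5.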
	
	
	==> describe nodes <==
	Name:               addons-965504
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-965504
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e087d081f23c6d1317bb12845422265d8d3490cf
	                    minikube.k8s.io/name=addons-965504
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T11_17_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-965504
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 11:17:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-965504
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:22:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:21:12 +0000   Mon, 29 Sep 2025 11:17:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:21:12 +0000   Mon, 29 Sep 2025 11:17:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:21:12 +0000   Mon, 29 Sep 2025 11:17:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:21:12 +0000   Mon, 29 Sep 2025 11:17:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.82
	  Hostname:    addons-965504
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008592Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008592Ki
	  pods:               110
	System Info:
	  Machine ID:                 ddaa83d679034b5b84baab28b2108f14
	  System UUID:                ddaa83d6-7903-4b5b-84ba-ab28b2108f14
	  Boot ID:                    734bd429-ac13-4605-84c8-7029106b3c9c
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     hello-world-app-5d498dc89-h4srd             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  gadget                      gadget-z5qw5                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-6pnrv    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m35s
	  kube-system                 amd-gpu-device-plugin-t8rkt                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 coredns-66bc5c9577-285tb                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m43s
	  kube-system                 etcd-addons-965504                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m49s
	  kube-system                 kube-apiserver-addons-965504                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 kube-controller-manager-addons-965504       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-proxy-dhkpx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-scheduler-addons-965504                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m41s                  kube-proxy       
	  Normal  Starting                 5m56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m56s (x8 over 5m56s)  kubelet          Node addons-965504 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m56s (x8 over 5m56s)  kubelet          Node addons-965504 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m56s (x7 over 5m56s)  kubelet          Node addons-965504 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m48s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m48s                  kubelet          Node addons-965504 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m48s                  kubelet          Node addons-965504 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m48s                  kubelet          Node addons-965504 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m47s                  kubelet          Node addons-965504 status is now: NodeReady
	  Normal  RegisteredNode           5m44s                  node-controller  Node addons-965504 event: Registered Node addons-965504 in Controller
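
This is the standard node description and can be regenerated at any point while the cluster is up:

	kubectl --context addons-965504 describe node addons-965504

The node stays Ready throughout, and only 850m of the 2 CPUs and 260Mi of memory are requested, so resource pressure looks unlikely here.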
	
	
	==> dmesg <==
	[ +15.939038] kauditd_printk_skb: 368 callbacks suppressed
	[  +6.396781] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.126442] kauditd_printk_skb: 32 callbacks suppressed
	[Sep29 11:18] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.371239] kauditd_printk_skb: 17 callbacks suppressed
	[ +10.401668] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.011867] kauditd_printk_skb: 56 callbacks suppressed
	[  +4.949134] kauditd_printk_skb: 129 callbacks suppressed
	[  +5.066623] kauditd_printk_skb: 81 callbacks suppressed
	[  +5.417348] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.752511] kauditd_printk_skb: 20 callbacks suppressed
	[Sep29 11:19] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.000029] kauditd_printk_skb: 47 callbacks suppressed
	[ +12.099461] kauditd_printk_skb: 41 callbacks suppressed
	[Sep29 11:20] kauditd_printk_skb: 22 callbacks suppressed
	[  +2.185902] kauditd_printk_skb: 107 callbacks suppressed
	[  +0.514468] kauditd_printk_skb: 111 callbacks suppressed
	[  +0.223386] kauditd_printk_skb: 167 callbacks suppressed
	[  +0.000106] kauditd_printk_skb: 99 callbacks suppressed
	[  +1.470745] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.716580] kauditd_printk_skb: 46 callbacks suppressed
	[Sep29 11:21] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000170] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.852555] kauditd_printk_skb: 41 callbacks suppressed
	[Sep29 11:22] kauditd_printk_skb: 127 callbacks suppressed
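
The only recurring kernel messages are kauditd_printk_skb rate-limiting notices, which are noisy but generally benign; nothing here suggests an OOM kill or a hardware fault. The full ring buffer can be pulled with:

	out/minikube-linux-amd64 -p addons-965504 ssh "sudo dmesg"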
	
	
	==> etcd [c8eeaf40c58444600e020b4594b4efcc397302c41ce9e0f8190c48656ed728a3] <==
	{"level":"info","ts":"2025-09-29T11:18:39.553894Z","caller":"traceutil/trace.go:172","msg":"trace[297152822] transaction","detail":"{read_only:false; response_revision:1162; number_of_response:1; }","duration":"186.491268ms","start":"2025-09-29T11:18:39.367395Z","end":"2025-09-29T11:18:39.553887Z","steps":["trace[297152822] 'process raft request'  (duration: 186.403487ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:18:39.559094Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"137.72442ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:18:39.559161Z","caller":"traceutil/trace.go:172","msg":"trace[783510210] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1162; }","duration":"137.803204ms","start":"2025-09-29T11:18:39.421346Z","end":"2025-09-29T11:18:39.559149Z","steps":["trace[783510210] 'agreement among raft nodes before linearized reading'  (duration: 137.697514ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:18:39.562187Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.096355ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:18:39.563739Z","caller":"traceutil/trace.go:172","msg":"trace[1783958157] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1163; }","duration":"115.488171ms","start":"2025-09-29T11:18:39.448074Z","end":"2025-09-29T11:18:39.563562Z","steps":["trace[1783958157] 'agreement among raft nodes before linearized reading'  (duration: 113.327412ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:19:15.425169Z","caller":"traceutil/trace.go:172","msg":"trace[2018782371] linearizableReadLoop","detail":"{readStateIndex:1299; appliedIndex:1299; }","duration":"224.342461ms","start":"2025-09-29T11:19:15.200785Z","end":"2025-09-29T11:19:15.425128Z","steps":["trace[2018782371] 'read index received'  (duration: 224.337007ms)","trace[2018782371] 'applied index is now lower than readState.Index'  (duration: 4.648µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:19:15.425410Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"224.592827ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.82\" limit:1 ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2025-09-29T11:19:15.425461Z","caller":"traceutil/trace.go:172","msg":"trace[491627902] range","detail":"{range_begin:/registry/masterleases/192.168.39.82; range_end:; response_count:1; response_revision:1252; }","duration":"224.681513ms","start":"2025-09-29T11:19:15.200768Z","end":"2025-09-29T11:19:15.425449Z","steps":["trace[491627902] 'agreement among raft nodes before linearized reading'  (duration: 224.515497ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:19:15.425811Z","caller":"traceutil/trace.go:172","msg":"trace[1587135177] transaction","detail":"{read_only:false; response_revision:1253; number_of_response:1; }","duration":"279.170361ms","start":"2025-09-29T11:19:15.146632Z","end":"2025-09-29T11:19:15.425802Z","steps":["trace[1587135177] 'process raft request'  (duration: 278.803121ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:20:13.464287Z","caller":"traceutil/trace.go:172","msg":"trace[312794108] linearizableReadLoop","detail":"{readStateIndex:1541; appliedIndex:1541; }","duration":"106.498893ms","start":"2025-09-29T11:20:13.357745Z","end":"2025-09-29T11:20:13.464244Z","steps":["trace[312794108] 'read index received'  (duration: 106.493554ms)","trace[312794108] 'applied index is now lower than readState.Index'  (duration: 4.565µs)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T11:20:13.464454Z","caller":"traceutil/trace.go:172","msg":"trace[451744622] transaction","detail":"{read_only:false; response_revision:1478; number_of_response:1; }","duration":"133.270064ms","start":"2025-09-29T11:20:13.331169Z","end":"2025-09-29T11:20:13.464439Z","steps":["trace[451744622] 'process raft request'  (duration: 133.149893ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:20:13.465387Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.260391ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" limit:1 ","response":"range_response_count:1 size:1837"}
	{"level":"info","ts":"2025-09-29T11:20:13.465500Z","caller":"traceutil/trace.go:172","msg":"trace[1418230023] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:1; response_revision:1478; }","duration":"105.371789ms","start":"2025-09-29T11:20:13.360110Z","end":"2025-09-29T11:20:13.465482Z","steps":["trace[1418230023] 'agreement among raft nodes before linearized reading'  (duration: 105.147566ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:20:13.464758Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.969311ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:20:13.465663Z","caller":"traceutil/trace.go:172","msg":"trace[862810645] range","detail":"{range_begin:/registry/priorityclasses; range_end:; response_count:0; response_revision:1478; }","duration":"107.915814ms","start":"2025-09-29T11:20:13.357741Z","end":"2025-09-29T11:20:13.465657Z","steps":["trace[862810645] 'agreement among raft nodes before linearized reading'  (duration: 106.940276ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:20:20.114763Z","caller":"traceutil/trace.go:172","msg":"trace[1557386403] linearizableReadLoop","detail":"{readStateIndex:1589; appliedIndex:1589; }","duration":"254.550089ms","start":"2025-09-29T11:20:19.860189Z","end":"2025-09-29T11:20:20.114739Z","steps":["trace[1557386403] 'read index received'  (duration: 254.543655ms)","trace[1557386403] 'applied index is now lower than readState.Index'  (duration: 5.331µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:20:20.114961Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"254.768218ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-09-29T11:20:20.115000Z","caller":"traceutil/trace.go:172","msg":"trace[783953766] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1524; }","duration":"254.821481ms","start":"2025-09-29T11:20:19.860170Z","end":"2025-09-29T11:20:20.114991Z","steps":["trace[783953766] 'agreement among raft nodes before linearized reading'  (duration: 254.683635ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:20:20.115245Z","caller":"traceutil/trace.go:172","msg":"trace[85666336] transaction","detail":"{read_only:false; response_revision:1525; number_of_response:1; }","duration":"320.508002ms","start":"2025-09-29T11:20:19.794729Z","end":"2025-09-29T11:20:20.115237Z","steps":["trace[85666336] 'process raft request'  (duration: 320.22714ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:20:20.115353Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"246.032101ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-09-29T11:20:20.115390Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-29T11:20:19.794712Z","time spent":"320.572525ms","remote":"127.0.0.1:32768","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3809,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/default/test-local-path\" mod_revision:1524 > success:<request_put:<key:\"/registry/pods/default/test-local-path\" value_size:3763 >> failure:<request_range:<key:\"/registry/pods/default/test-local-path\" > >"}
	{"level":"info","ts":"2025-09-29T11:20:20.115421Z","caller":"traceutil/trace.go:172","msg":"trace[216524853] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1525; }","duration":"246.15422ms","start":"2025-09-29T11:20:19.869228Z","end":"2025-09-29T11:20:20.115382Z","steps":["trace[216524853] 'agreement among raft nodes before linearized reading'  (duration: 245.983903ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:20:20.115624Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.762962ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:20:20.115669Z","caller":"traceutil/trace.go:172","msg":"trace[808605639] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:1525; }","duration":"115.83401ms","start":"2025-09-29T11:20:19.999805Z","end":"2025-09-29T11:20:20.115639Z","steps":["trace[808605639] 'agreement among raft nodes before linearized reading'  (duration: 115.746338ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T11:21:02.291261Z","caller":"traceutil/trace.go:172","msg":"trace[1922562365] transaction","detail":"{read_only:false; response_revision:1766; number_of_response:1; }","duration":"192.152282ms","start":"2025-09-29T11:21:02.099094Z","end":"2025-09-29T11:21:02.291246Z","steps":["trace[1922562365] 'process raft request'  (duration: 192.042464ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:22:54 up 6 min,  0 users,  load average: 1.26, 1.54, 0.80
	Linux addons-965504 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [acf9e5ef64cbfc127e110f63224c339f0faeb53e3ea626bef5666375b8adfdee] <==
	E0929 11:19:55.589791       1 conn.go:339] Error on socket receive: read tcp 192.168.39.82:8443->192.168.39.1:56258: use of closed network connection
	I0929 11:20:05.092181       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.9.126"}
	I0929 11:20:26.596482       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0929 11:20:26.777549       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.178.177"}
	E0929 11:20:36.379755       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0929 11:20:48.354654       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:21:01.124802       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0929 11:21:03.260915       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0929 11:21:13.023311       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:21:19.430690       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 11:21:19.430824       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 11:21:19.461381       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 11:21:19.461435       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 11:21:19.469869       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 11:21:19.469914       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 11:21:19.503371       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 11:21:19.503423       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 11:21:19.526890       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 11:21:19.526969       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0929 11:21:20.470364       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0929 11:21:20.527546       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0929 11:21:20.555376       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0929 11:21:54.693824       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:22:16.177243       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:22:52.693810       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.106.22"}
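
The burst of snapshot.storage.k8s.io ResourceManager updates at 11:21:19 followed by "Terminating all watchers" at 11:21:20 is consistent with the VolumeSnapshot CRDs being removed as part of addon teardown, and it lines up with the controller-manager errors in the next section. If the CRDs are indeed gone, this should print nothing:

	kubectl --context addons-965504 get crd | grep snapshot.storage.k8s.io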
	
	
	==> kube-controller-manager [354bbdf20e78d02d960b753a77b22181f5cb1f9beafe5bda721b1ca79b430244] <==
	E0929 11:21:28.203412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:21:29.931510       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:21:29.933189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:21:35.699942       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:21:35.700913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:21:40.467547       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:21:40.468805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0929 11:21:40.497962       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0929 11:21:40.498014       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 11:21:40.559079       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0929 11:21:40.559200       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 11:21:41.236351       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:21:41.237413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:21:55.823556       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:21:55.824542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:21:59.875420       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:21:59.876779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:21:59.926988       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:21:59.928119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:22:32.447724       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:22:32.449018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:22:40.439126       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:22:40.440686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:22:44.049436       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:22:44.050496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
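
These entries are all the same condition: the metadata informer keeps retrying LIST/WATCH for a *v1.PartialObjectMetadata resource whose backing CRD no longer exists (see the watcher terminations in the kube-apiserver log above), so each reflector falls back to plain LIST/WATCH and fails again. That is expected churn after a CRD delete and should not, by itself, affect the Ingress test. Whether any volumesnapshot kinds remain can be checked with:

	kubectl --context addons-965504 api-resources | grep -c volumesnapshot

which prints 0 once they are removed.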
	
	
	==> kube-proxy [a456f80188b9d9b7a27e5d6b081ce7917e59d3329af8828f220cff7d56cebf44] <==
	I0929 11:17:12.528485       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:17:12.629820       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:17:12.629994       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.82"]
	E0929 11:17:12.630333       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:17:12.870463       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0929 11:17:12.870529       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 11:17:12.870558       1 server_linux.go:132] "Using iptables Proxier"
	I0929 11:17:12.892700       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:17:12.893016       1 server.go:527] "Version info" version="v1.34.0"
	I0929 11:17:12.893218       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:17:12.899053       1 config.go:200] "Starting service config controller"
	I0929 11:17:12.899085       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 11:17:12.899123       1 config.go:106] "Starting endpoint slice config controller"
	I0929 11:17:12.899128       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 11:17:12.899139       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 11:17:12.899142       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 11:17:12.912944       1 config.go:309] "Starting node config controller"
	I0929 11:17:12.912975       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 11:17:12.913883       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 11:17:12.999912       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 11:17:13.000004       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 11:17:13.000644       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
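
kube-proxy probed for IPv6 NAT support, found the guest kernel lacks the ip6tables nat table, and fell back to single-stack IPv4 with the iptables proxier; the rest is a normal startup ending with all caches synced. The generated service rules can be inspected from inside the VM, for example:

	out/minikube-linux-amd64 -p addons-965504 ssh "sudo iptables -t nat -L KUBE-SERVICES | head -n 20"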
	
	
	==> kube-scheduler [7fe18edfe1adedb4b2aa5743d4b3c1c687272f06849f803da298f3711de67371] <==
	E0929 11:17:03.441426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 11:17:03.441530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 11:17:03.441643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 11:17:03.441908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 11:17:03.442309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 11:17:03.442984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 11:17:03.443725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 11:17:03.443816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 11:17:03.443895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 11:17:03.443983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 11:17:03.444111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 11:17:04.279341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 11:17:04.344730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 11:17:04.411077       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 11:17:04.431029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 11:17:04.536701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 11:17:04.537690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 11:17:04.553402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 11:17:04.570563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 11:17:04.619870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 11:17:04.655377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 11:17:04.720742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 11:17:04.748658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 11:17:04.964695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0929 11:17:07.824649       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 11:21:22 addons-965504 kubelet[1512]: I0929 11:21:22.480447    1512 scope.go:117] "RemoveContainer" containerID="ab396ea897c2625ad2adcffc9ccee4a8a31263044f4734af9be40827f5e84057"
	Sep 29 11:21:22 addons-965504 kubelet[1512]: I0929 11:21:22.594462    1512 scope.go:117] "RemoveContainer" containerID="ab396ea897c2625ad2adcffc9ccee4a8a31263044f4734af9be40827f5e84057"
	Sep 29 11:21:22 addons-965504 kubelet[1512]: E0929 11:21:22.595058    1512 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab396ea897c2625ad2adcffc9ccee4a8a31263044f4734af9be40827f5e84057\": container with ID starting with ab396ea897c2625ad2adcffc9ccee4a8a31263044f4734af9be40827f5e84057 not found: ID does not exist" containerID="ab396ea897c2625ad2adcffc9ccee4a8a31263044f4734af9be40827f5e84057"
	Sep 29 11:21:22 addons-965504 kubelet[1512]: I0929 11:21:22.595101    1512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab396ea897c2625ad2adcffc9ccee4a8a31263044f4734af9be40827f5e84057"} err="failed to get container status \"ab396ea897c2625ad2adcffc9ccee4a8a31263044f4734af9be40827f5e84057\": rpc error: code = NotFound desc = could not find container \"ab396ea897c2625ad2adcffc9ccee4a8a31263044f4734af9be40827f5e84057\": container with ID starting with ab396ea897c2625ad2adcffc9ccee4a8a31263044f4734af9be40827f5e84057 not found: ID does not exist"
	Sep 29 11:21:26 addons-965504 kubelet[1512]: E0929 11:21:26.381395    1512 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759144886381068466  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 29 11:21:26 addons-965504 kubelet[1512]: E0929 11:21:26.381433    1512 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759144886381068466  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 29 11:21:36 addons-965504 kubelet[1512]: E0929 11:21:36.384172    1512 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759144896383556841  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 29 11:21:36 addons-965504 kubelet[1512]: E0929 11:21:36.384503    1512 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759144896383556841  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 29 11:21:46 addons-965504 kubelet[1512]: E0929 11:21:46.387912    1512 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759144906387418356  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 29 11:21:46 addons-965504 kubelet[1512]: E0929 11:21:46.387955    1512 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759144906387418356  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 29 11:21:56 addons-965504 kubelet[1512]: E0929 11:21:56.391684    1512 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759144916391179795  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 29 11:21:56 addons-965504 kubelet[1512]: E0929 11:21:56.391711    1512 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759144916391179795  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 29 11:22:06 addons-965504 kubelet[1512]: E0929 11:22:06.395217    1512 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759144926394455287  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 29 11:22:06 addons-965504 kubelet[1512]: E0929 11:22:06.395243    1512 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759144926394455287  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 29 11:22:16 addons-965504 kubelet[1512]: E0929 11:22:16.398631    1512 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759144936398201038  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 29 11:22:16 addons-965504 kubelet[1512]: E0929 11:22:16.398662    1512 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759144936398201038  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 29 11:22:24 addons-965504 kubelet[1512]: I0929 11:22:24.174772    1512 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 11:22:26 addons-965504 kubelet[1512]: E0929 11:22:26.401075    1512 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759144946400632559  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 29 11:22:26 addons-965504 kubelet[1512]: E0929 11:22:26.401104    1512 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759144946400632559  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 29 11:22:30 addons-965504 kubelet[1512]: I0929 11:22:30.169938    1512 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-t8rkt" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 11:22:36 addons-965504 kubelet[1512]: E0929 11:22:36.405284    1512 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759144956403954648  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 29 11:22:36 addons-965504 kubelet[1512]: E0929 11:22:36.405313    1512 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759144956403954648  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 29 11:22:46 addons-965504 kubelet[1512]: E0929 11:22:46.407884    1512 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759144966407377557  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 29 11:22:46 addons-965504 kubelet[1512]: E0929 11:22:46.407924    1512 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759144966407377557  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596878}  inodes_used:{value:201}}"
	Sep 29 11:22:52 addons-965504 kubelet[1512]: I0929 11:22:52.729312    1512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld2kt\" (UniqueName: \"kubernetes.io/projected/bb55bc42-a1ae-422a-867d-a30b5eaa3d7a-kube-api-access-ld2kt\") pod \"hello-world-app-5d498dc89-h4srd\" (UID: \"bb55bc42-a1ae-422a-867d-a30b5eaa3d7a\") " pod="default/hello-world-app-5d498dc89-h4srd"
	
	
	==> storage-provisioner [74c5fbab44af81c4dea6e51856ccfdeaad44858b1c1744d2f22481155bbda9a5] <==
	W0929 11:22:28.957713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:22:30.961627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:22:30.967138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:22:32.970549       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:22:32.976525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:22:34.981639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:22:34.987337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:22:36.992261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:22:36.999805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:22:39.003337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:22:39.011014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:22:41.014333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:22:41.019049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:22:43.023481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:22:43.032085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:22:45.035194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:22:45.041897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:22:47.046886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:22:47.057195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:22:49.060648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:22:49.065267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:22:51.068487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:22:51.074288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:22:53.080897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:22:53.092133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
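
Note: the repeated storage-provisioner warnings above are server-side deprecation notices relayed by client-go: the provisioner still lists/watches v1 Endpoints, which Kubernetes deprecates in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. A minimal client-go sketch of the replacement call, assuming in-cluster credentials (illustrative only, not code from this repo or from storage-provisioner):

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Assumption: this runs inside the cluster, like storage-provisioner does.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// List EndpointSlices instead of v1 Endpoints, which avoids the
		// "v1 Endpoints is deprecated" warning seen in the log above.
		slices, err := cs.DiscoveryV1().EndpointSlices("default").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, s := range slices.Items {
			fmt.Println(s.Name, len(s.Endpoints))
		}
	}
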
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-965504 -n addons-965504
helpers_test.go:269: (dbg) Run:  kubectl --context addons-965504 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-h4srd ingress-nginx-admission-create-8js7x ingress-nginx-admission-patch-j7szj
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-965504 describe pod hello-world-app-5d498dc89-h4srd ingress-nginx-admission-create-8js7x ingress-nginx-admission-patch-j7szj
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-965504 describe pod hello-world-app-5d498dc89-h4srd ingress-nginx-admission-create-8js7x ingress-nginx-admission-patch-j7szj: exit status 1 (87.579484ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-h4srd
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-965504/192.168.39.82
	Start Time:       Mon, 29 Sep 2025 11:22:52 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ld2kt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ld2kt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-h4srd to addons-965504
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-8js7x" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-j7szj" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-965504 describe pod hello-world-app-5d498dc89-h4srd ingress-nginx-admission-create-8js7x ingress-nginx-admission-patch-j7szj: exit status 1
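
Note: the non-running pods named above come from the field-selector query at helpers_test.go:269 (status.phase!=Running across all namespaces); the two ingress-nginx admission pods appear to have been cleaned up between that list and the describe call, hence the NotFound errors. A rough client-go equivalent of the same query, assuming a kubeconfig at the default path (a sketch, not the harness's actual code):

	package main

	import (
		"context"
		"fmt"
		"log"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		// Assumption: credentials live in ~/.kube/config (the test harness uses
		// the KUBECONFIG written by minikube instead).
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Same filter the harness uses: every pod whose phase is not Running,
		// in all namespaces.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace, p.Name, p.Status.Phase)
		}
	}
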
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965504 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-965504 addons disable ingress-dns --alsologtostderr -v=1: (1.144196413s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965504 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-965504 addons disable ingress --alsologtostderr -v=1: (7.846338598s)
--- FAIL: TestAddons/parallel/Ingress (158.08s)

TestPreload (133.73s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-547438 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
E0929 12:09:46.269119  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-547438 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m5.609136991s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-547438 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-547438 image pull gcr.io/k8s-minikube/busybox: (3.244802769s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-547438
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-547438: (8.21294487s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-547438 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-547438 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (53.678182577s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-547438 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
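
Note: the image list above is the whole substance of this failure. TestPreload pulls gcr.io/k8s-minikube/busybox before `minikube stop` (preload_test.go:51) and asserts at preload_test.go:75 that the image still appears in `image list` after the restart; here it does not. The assertion is a plain substring check over the CLI output. A minimal sketch of the same check via os/exec, with the binary path and profile name taken from this run (illustrative, not the test's actual code):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Re-run the post-restart check: the image pulled before `minikube stop`
		// should still be listed by the restarted cluster.
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "test-preload-547438", "image", "list").CombinedOutput()
		if err != nil {
			log.Fatalf("image list failed: %v\n%s", err, out)
		}
		if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			log.Fatalf("expected gcr.io/k8s-minikube/busybox in image list, got:\n%s", out)
		}
		fmt.Println("busybox image survived the restart")
	}
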
panic.go:636: *** TestPreload FAILED at 2025-09-29 12:11:28.879919427 +0000 UTC m=+3339.321830746
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-547438 -n test-preload-547438
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-547438 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-547438 logs -n 25: (1.088447265s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-404520 ssh -n multinode-404520-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-404520     │ jenkins │ v1.37.0 │ 29 Sep 25 11:57 UTC │ 29 Sep 25 11:57 UTC │
	│ ssh     │ multinode-404520 ssh -n multinode-404520 sudo cat /home/docker/cp-test_multinode-404520-m03_multinode-404520.txt                                                                    │ multinode-404520     │ jenkins │ v1.37.0 │ 29 Sep 25 11:57 UTC │ 29 Sep 25 11:57 UTC │
	│ cp      │ multinode-404520 cp multinode-404520-m03:/home/docker/cp-test.txt multinode-404520-m02:/home/docker/cp-test_multinode-404520-m03_multinode-404520-m02.txt                           │ multinode-404520     │ jenkins │ v1.37.0 │ 29 Sep 25 11:57 UTC │ 29 Sep 25 11:57 UTC │
	│ ssh     │ multinode-404520 ssh -n multinode-404520-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-404520     │ jenkins │ v1.37.0 │ 29 Sep 25 11:57 UTC │ 29 Sep 25 11:57 UTC │
	│ ssh     │ multinode-404520 ssh -n multinode-404520-m02 sudo cat /home/docker/cp-test_multinode-404520-m03_multinode-404520-m02.txt                                                            │ multinode-404520     │ jenkins │ v1.37.0 │ 29 Sep 25 11:57 UTC │ 29 Sep 25 11:57 UTC │
	│ node    │ multinode-404520 node stop m03                                                                                                                                                      │ multinode-404520     │ jenkins │ v1.37.0 │ 29 Sep 25 11:57 UTC │ 29 Sep 25 11:57 UTC │
	│ node    │ multinode-404520 node start m03 -v=5 --alsologtostderr                                                                                                                              │ multinode-404520     │ jenkins │ v1.37.0 │ 29 Sep 25 11:57 UTC │ 29 Sep 25 11:58 UTC │
	│ node    │ list -p multinode-404520                                                                                                                                                            │ multinode-404520     │ jenkins │ v1.37.0 │ 29 Sep 25 11:58 UTC │                     │
	│ stop    │ -p multinode-404520                                                                                                                                                                 │ multinode-404520     │ jenkins │ v1.37.0 │ 29 Sep 25 11:58 UTC │ 29 Sep 25 12:01 UTC │
	│ start   │ -p multinode-404520 --wait=true -v=5 --alsologtostderr                                                                                                                              │ multinode-404520     │ jenkins │ v1.37.0 │ 29 Sep 25 12:01 UTC │ 29 Sep 25 12:04 UTC │
	│ node    │ list -p multinode-404520                                                                                                                                                            │ multinode-404520     │ jenkins │ v1.37.0 │ 29 Sep 25 12:04 UTC │                     │
	│ node    │ multinode-404520 node delete m03                                                                                                                                                    │ multinode-404520     │ jenkins │ v1.37.0 │ 29 Sep 25 12:04 UTC │ 29 Sep 25 12:04 UTC │
	│ stop    │ multinode-404520 stop                                                                                                                                                               │ multinode-404520     │ jenkins │ v1.37.0 │ 29 Sep 25 12:04 UTC │ 29 Sep 25 12:07 UTC │
	│ start   │ -p multinode-404520 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                          │ multinode-404520     │ jenkins │ v1.37.0 │ 29 Sep 25 12:07 UTC │ 29 Sep 25 12:08 UTC │
	│ node    │ list -p multinode-404520                                                                                                                                                            │ multinode-404520     │ jenkins │ v1.37.0 │ 29 Sep 25 12:08 UTC │                     │
	│ start   │ -p multinode-404520-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-404520-m02 │ jenkins │ v1.37.0 │ 29 Sep 25 12:08 UTC │                     │
	│ start   │ -p multinode-404520-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-404520-m03 │ jenkins │ v1.37.0 │ 29 Sep 25 12:08 UTC │ 29 Sep 25 12:09 UTC │
	│ node    │ add -p multinode-404520                                                                                                                                                             │ multinode-404520     │ jenkins │ v1.37.0 │ 29 Sep 25 12:09 UTC │                     │
	│ delete  │ -p multinode-404520-m03                                                                                                                                                             │ multinode-404520-m03 │ jenkins │ v1.37.0 │ 29 Sep 25 12:09 UTC │ 29 Sep 25 12:09 UTC │
	│ delete  │ -p multinode-404520                                                                                                                                                                 │ multinode-404520     │ jenkins │ v1.37.0 │ 29 Sep 25 12:09 UTC │ 29 Sep 25 12:09 UTC │
	│ start   │ -p test-preload-547438 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0 │ test-preload-547438  │ jenkins │ v1.37.0 │ 29 Sep 25 12:09 UTC │ 29 Sep 25 12:10 UTC │
	│ image   │ test-preload-547438 image pull gcr.io/k8s-minikube/busybox                                                                                                                          │ test-preload-547438  │ jenkins │ v1.37.0 │ 29 Sep 25 12:10 UTC │ 29 Sep 25 12:10 UTC │
	│ stop    │ -p test-preload-547438                                                                                                                                                              │ test-preload-547438  │ jenkins │ v1.37.0 │ 29 Sep 25 12:10 UTC │ 29 Sep 25 12:10 UTC │
	│ start   │ -p test-preload-547438 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                         │ test-preload-547438  │ jenkins │ v1.37.0 │ 29 Sep 25 12:10 UTC │ 29 Sep 25 12:11 UTC │
	│ image   │ test-preload-547438 image list                                                                                                                                                      │ test-preload-547438  │ jenkins │ v1.37.0 │ 29 Sep 25 12:11 UTC │ 29 Sep 25 12:11 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 12:10:35
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 12:10:35.028608  400750 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:10:35.028927  400750 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:10:35.028938  400750 out.go:374] Setting ErrFile to fd 2...
	I0929 12:10:35.028945  400750 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:10:35.029207  400750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-365455/.minikube/bin
	I0929 12:10:35.029706  400750 out.go:368] Setting JSON to false
	I0929 12:10:35.030681  400750 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6777,"bootTime":1759141058,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:10:35.030765  400750 start.go:140] virtualization: kvm guest
	I0929 12:10:35.032911  400750 out.go:179] * [test-preload-547438] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 12:10:35.034271  400750 notify.go:220] Checking for updates...
	I0929 12:10:35.034344  400750 out.go:179]   - MINIKUBE_LOCATION=21655
	I0929 12:10:35.035817  400750 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:10:35.037139  400750 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21655-365455/kubeconfig
	I0929 12:10:35.038377  400750 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-365455/.minikube
	I0929 12:10:35.039661  400750 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 12:10:35.040928  400750 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:10:35.042660  400750 config.go:182] Loaded profile config "test-preload-547438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0929 12:10:35.043248  400750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 12:10:35.043338  400750 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:10:35.057177  400750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40453
	I0929 12:10:35.057684  400750 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:10:35.058276  400750 main.go:141] libmachine: Using API Version  1
	I0929 12:10:35.058305  400750 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:10:35.058851  400750 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:10:35.059079  400750 main.go:141] libmachine: (test-preload-547438) Calling .DriverName
	I0929 12:10:35.060845  400750 out.go:179] * Kubernetes 1.34.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.0
	I0929 12:10:35.062235  400750 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:10:35.062565  400750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 12:10:35.062607  400750 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:10:35.076431  400750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43633
	I0929 12:10:35.077101  400750 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:10:35.077602  400750 main.go:141] libmachine: Using API Version  1
	I0929 12:10:35.077627  400750 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:10:35.078025  400750 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:10:35.078215  400750 main.go:141] libmachine: (test-preload-547438) Calling .DriverName
	I0929 12:10:35.113325  400750 out.go:179] * Using the kvm2 driver based on existing profile
	I0929 12:10:35.114710  400750 start.go:304] selected driver: kvm2
	I0929 12:10:35.114730  400750 start.go:924] validating driver "kvm2" against &{Name:test-preload-547438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-547438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.143 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:10:35.114844  400750 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:10:35.115521  400750 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:10:35.115636  400750 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21655-365455/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 12:10:35.130292  400750 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 12:10:35.130325  400750 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21655-365455/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 12:10:35.143933  400750 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 12:10:35.144353  400750 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:10:35.144392  400750 cni.go:84] Creating CNI manager for ""
	I0929 12:10:35.144433  400750 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 12:10:35.144496  400750 start.go:348] cluster config:
	{Name:test-preload-547438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-547438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.143 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:10:35.144599  400750 iso.go:125] acquiring lock: {Name:mkf6a4bd1628698e7eb4c42d44aa8328e64686d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:10:35.146216  400750 out.go:179] * Starting "test-preload-547438" primary control-plane node in "test-preload-547438" cluster
	I0929 12:10:35.147467  400750 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0929 12:10:35.238868  400750 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0929 12:10:35.238908  400750 cache.go:58] Caching tarball of preloaded images
	I0929 12:10:35.239122  400750 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0929 12:10:35.240853  400750 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I0929 12:10:35.241889  400750 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0929 12:10:35.344290  400750 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21655-365455/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0929 12:10:44.615001  400750 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0929 12:10:44.615111  400750 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21655-365455/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0929 12:10:45.379633  400750 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0929 12:10:45.379766  400750 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/test-preload-547438/config.json ...
	I0929 12:10:45.380817  400750 start.go:360] acquireMachinesLock for test-preload-547438: {Name:mk02e688f69f8dfa335651bd732d9d18b60c0952 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0929 12:10:45.380914  400750 start.go:364] duration metric: took 60.553µs to acquireMachinesLock for "test-preload-547438"
	I0929 12:10:45.380939  400750 start.go:96] Skipping create...Using existing machine configuration
	I0929 12:10:45.380947  400750 fix.go:54] fixHost starting: 
	I0929 12:10:45.381326  400750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 12:10:45.381372  400750 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:10:45.394985  400750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46419
	I0929 12:10:45.395574  400750 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:10:45.396223  400750 main.go:141] libmachine: Using API Version  1
	I0929 12:10:45.396250  400750 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:10:45.396641  400750 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:10:45.396837  400750 main.go:141] libmachine: (test-preload-547438) Calling .DriverName
	I0929 12:10:45.397052  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetState
	I0929 12:10:45.399032  400750 fix.go:112] recreateIfNeeded on test-preload-547438: state=Stopped err=<nil>
	I0929 12:10:45.399059  400750 main.go:141] libmachine: (test-preload-547438) Calling .DriverName
	W0929 12:10:45.399235  400750 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 12:10:45.486776  400750 out.go:252] * Restarting existing kvm2 VM for "test-preload-547438" ...
	I0929 12:10:45.486838  400750 main.go:141] libmachine: (test-preload-547438) Calling .Start
	I0929 12:10:45.487226  400750 main.go:141] libmachine: (test-preload-547438) starting domain...
	I0929 12:10:45.487254  400750 main.go:141] libmachine: (test-preload-547438) ensuring networks are active...
	I0929 12:10:45.488211  400750 main.go:141] libmachine: (test-preload-547438) Ensuring network default is active
	I0929 12:10:45.488609  400750 main.go:141] libmachine: (test-preload-547438) Ensuring network mk-test-preload-547438 is active
	I0929 12:10:45.489194  400750 main.go:141] libmachine: (test-preload-547438) getting domain XML...
	I0929 12:10:45.490186  400750 main.go:141] libmachine: (test-preload-547438) DBG | starting domain XML:
	I0929 12:10:45.490212  400750 main.go:141] libmachine: (test-preload-547438) DBG | <domain type='kvm'>
	I0929 12:10:45.490223  400750 main.go:141] libmachine: (test-preload-547438) DBG |   <name>test-preload-547438</name>
	I0929 12:10:45.490232  400750 main.go:141] libmachine: (test-preload-547438) DBG |   <uuid>85119d9d-5c7e-490a-a0ae-f2f497e9805b</uuid>
	I0929 12:10:45.490266  400750 main.go:141] libmachine: (test-preload-547438) DBG |   <memory unit='KiB'>3145728</memory>
	I0929 12:10:45.490285  400750 main.go:141] libmachine: (test-preload-547438) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I0929 12:10:45.490297  400750 main.go:141] libmachine: (test-preload-547438) DBG |   <vcpu placement='static'>2</vcpu>
	I0929 12:10:45.490301  400750 main.go:141] libmachine: (test-preload-547438) DBG |   <os>
	I0929 12:10:45.490308  400750 main.go:141] libmachine: (test-preload-547438) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0929 12:10:45.490316  400750 main.go:141] libmachine: (test-preload-547438) DBG |     <boot dev='cdrom'/>
	I0929 12:10:45.490327  400750 main.go:141] libmachine: (test-preload-547438) DBG |     <boot dev='hd'/>
	I0929 12:10:45.490336  400750 main.go:141] libmachine: (test-preload-547438) DBG |     <bootmenu enable='no'/>
	I0929 12:10:45.490344  400750 main.go:141] libmachine: (test-preload-547438) DBG |   </os>
	I0929 12:10:45.490352  400750 main.go:141] libmachine: (test-preload-547438) DBG |   <features>
	I0929 12:10:45.490388  400750 main.go:141] libmachine: (test-preload-547438) DBG |     <acpi/>
	I0929 12:10:45.490413  400750 main.go:141] libmachine: (test-preload-547438) DBG |     <apic/>
	I0929 12:10:45.490445  400750 main.go:141] libmachine: (test-preload-547438) DBG |     <pae/>
	I0929 12:10:45.490463  400750 main.go:141] libmachine: (test-preload-547438) DBG |   </features>
	I0929 12:10:45.490480  400750 main.go:141] libmachine: (test-preload-547438) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0929 12:10:45.490491  400750 main.go:141] libmachine: (test-preload-547438) DBG |   <clock offset='utc'/>
	I0929 12:10:45.490504  400750 main.go:141] libmachine: (test-preload-547438) DBG |   <on_poweroff>destroy</on_poweroff>
	I0929 12:10:45.490515  400750 main.go:141] libmachine: (test-preload-547438) DBG |   <on_reboot>restart</on_reboot>
	I0929 12:10:45.490528  400750 main.go:141] libmachine: (test-preload-547438) DBG |   <on_crash>destroy</on_crash>
	I0929 12:10:45.490542  400750 main.go:141] libmachine: (test-preload-547438) DBG |   <devices>
	I0929 12:10:45.490556  400750 main.go:141] libmachine: (test-preload-547438) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0929 12:10:45.490568  400750 main.go:141] libmachine: (test-preload-547438) DBG |     <disk type='file' device='cdrom'>
	I0929 12:10:45.490579  400750 main.go:141] libmachine: (test-preload-547438) DBG |       <driver name='qemu' type='raw'/>
	I0929 12:10:45.490595  400750 main.go:141] libmachine: (test-preload-547438) DBG |       <source file='/home/jenkins/minikube-integration/21655-365455/.minikube/machines/test-preload-547438/boot2docker.iso'/>
	I0929 12:10:45.490608  400750 main.go:141] libmachine: (test-preload-547438) DBG |       <target dev='hdc' bus='scsi'/>
	I0929 12:10:45.490622  400750 main.go:141] libmachine: (test-preload-547438) DBG |       <readonly/>
	I0929 12:10:45.490637  400750 main.go:141] libmachine: (test-preload-547438) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0929 12:10:45.490647  400750 main.go:141] libmachine: (test-preload-547438) DBG |     </disk>
	I0929 12:10:45.490657  400750 main.go:141] libmachine: (test-preload-547438) DBG |     <disk type='file' device='disk'>
	I0929 12:10:45.490669  400750 main.go:141] libmachine: (test-preload-547438) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0929 12:10:45.490688  400750 main.go:141] libmachine: (test-preload-547438) DBG |       <source file='/home/jenkins/minikube-integration/21655-365455/.minikube/machines/test-preload-547438/test-preload-547438.rawdisk'/>
	I0929 12:10:45.490702  400750 main.go:141] libmachine: (test-preload-547438) DBG |       <target dev='hda' bus='virtio'/>
	I0929 12:10:45.490717  400750 main.go:141] libmachine: (test-preload-547438) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0929 12:10:45.490728  400750 main.go:141] libmachine: (test-preload-547438) DBG |     </disk>
	I0929 12:10:45.490739  400750 main.go:141] libmachine: (test-preload-547438) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0929 12:10:45.490760  400750 main.go:141] libmachine: (test-preload-547438) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0929 12:10:45.490780  400750 main.go:141] libmachine: (test-preload-547438) DBG |     </controller>
	I0929 12:10:45.490809  400750 main.go:141] libmachine: (test-preload-547438) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0929 12:10:45.490823  400750 main.go:141] libmachine: (test-preload-547438) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0929 12:10:45.490839  400750 main.go:141] libmachine: (test-preload-547438) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0929 12:10:45.490853  400750 main.go:141] libmachine: (test-preload-547438) DBG |     </controller>
	I0929 12:10:45.490864  400750 main.go:141] libmachine: (test-preload-547438) DBG |     <interface type='network'>
	I0929 12:10:45.490890  400750 main.go:141] libmachine: (test-preload-547438) DBG |       <mac address='52:54:00:a8:8e:81'/>
	I0929 12:10:45.490910  400750 main.go:141] libmachine: (test-preload-547438) DBG |       <source network='mk-test-preload-547438'/>
	I0929 12:10:45.490926  400750 main.go:141] libmachine: (test-preload-547438) DBG |       <model type='virtio'/>
	I0929 12:10:45.490940  400750 main.go:141] libmachine: (test-preload-547438) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0929 12:10:45.490952  400750 main.go:141] libmachine: (test-preload-547438) DBG |     </interface>
	I0929 12:10:45.490964  400750 main.go:141] libmachine: (test-preload-547438) DBG |     <interface type='network'>
	I0929 12:10:45.490990  400750 main.go:141] libmachine: (test-preload-547438) DBG |       <mac address='52:54:00:04:64:3d'/>
	I0929 12:10:45.491009  400750 main.go:141] libmachine: (test-preload-547438) DBG |       <source network='default'/>
	I0929 12:10:45.491023  400750 main.go:141] libmachine: (test-preload-547438) DBG |       <model type='virtio'/>
	I0929 12:10:45.491036  400750 main.go:141] libmachine: (test-preload-547438) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0929 12:10:45.491049  400750 main.go:141] libmachine: (test-preload-547438) DBG |     </interface>
	I0929 12:10:45.491060  400750 main.go:141] libmachine: (test-preload-547438) DBG |     <serial type='pty'>
	I0929 12:10:45.491074  400750 main.go:141] libmachine: (test-preload-547438) DBG |       <target type='isa-serial' port='0'>
	I0929 12:10:45.491088  400750 main.go:141] libmachine: (test-preload-547438) DBG |         <model name='isa-serial'/>
	I0929 12:10:45.491100  400750 main.go:141] libmachine: (test-preload-547438) DBG |       </target>
	I0929 12:10:45.491111  400750 main.go:141] libmachine: (test-preload-547438) DBG |     </serial>
	I0929 12:10:45.491124  400750 main.go:141] libmachine: (test-preload-547438) DBG |     <console type='pty'>
	I0929 12:10:45.491135  400750 main.go:141] libmachine: (test-preload-547438) DBG |       <target type='serial' port='0'/>
	I0929 12:10:45.491144  400750 main.go:141] libmachine: (test-preload-547438) DBG |     </console>
	I0929 12:10:45.491156  400750 main.go:141] libmachine: (test-preload-547438) DBG |     <input type='mouse' bus='ps2'/>
	I0929 12:10:45.491168  400750 main.go:141] libmachine: (test-preload-547438) DBG |     <input type='keyboard' bus='ps2'/>
	I0929 12:10:45.491181  400750 main.go:141] libmachine: (test-preload-547438) DBG |     <audio id='1' type='none'/>
	I0929 12:10:45.491193  400750 main.go:141] libmachine: (test-preload-547438) DBG |     <memballoon model='virtio'>
	I0929 12:10:45.491208  400750 main.go:141] libmachine: (test-preload-547438) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0929 12:10:45.491219  400750 main.go:141] libmachine: (test-preload-547438) DBG |     </memballoon>
	I0929 12:10:45.491231  400750 main.go:141] libmachine: (test-preload-547438) DBG |     <rng model='virtio'>
	I0929 12:10:45.491245  400750 main.go:141] libmachine: (test-preload-547438) DBG |       <backend model='random'>/dev/random</backend>
	I0929 12:10:45.491260  400750 main.go:141] libmachine: (test-preload-547438) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0929 12:10:45.491270  400750 main.go:141] libmachine: (test-preload-547438) DBG |     </rng>
	I0929 12:10:45.491280  400750 main.go:141] libmachine: (test-preload-547438) DBG |   </devices>
	I0929 12:10:45.491290  400750 main.go:141] libmachine: (test-preload-547438) DBG | </domain>
	I0929 12:10:45.491302  400750 main.go:141] libmachine: (test-preload-547438) DBG | 
	I0929 12:10:46.783910  400750 main.go:141] libmachine: (test-preload-547438) waiting for domain to start...
	I0929 12:10:46.785264  400750 main.go:141] libmachine: (test-preload-547438) domain is now running
	I0929 12:10:46.785295  400750 main.go:141] libmachine: (test-preload-547438) waiting for IP...
	I0929 12:10:46.786122  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:10:46.786624  400750 main.go:141] libmachine: (test-preload-547438) found domain IP: 192.168.39.143
	I0929 12:10:46.786662  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has current primary IP address 192.168.39.143 and MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:10:46.786670  400750 main.go:141] libmachine: (test-preload-547438) reserving static IP address...
	I0929 12:10:46.787063  400750 main.go:141] libmachine: (test-preload-547438) DBG | found host DHCP lease matching {name: "test-preload-547438", mac: "52:54:00:a8:8e:81", ip: "192.168.39.143"} in network mk-test-preload-547438: {Iface:virbr1 ExpiryTime:2025-09-29 13:09:33 +0000 UTC Type:0 Mac:52:54:00:a8:8e:81 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:test-preload-547438 Clientid:01:52:54:00:a8:8e:81}
	I0929 12:10:46.787102  400750 main.go:141] libmachine: (test-preload-547438) DBG | skip adding static IP to network mk-test-preload-547438 - found existing host DHCP lease matching {name: "test-preload-547438", mac: "52:54:00:a8:8e:81", ip: "192.168.39.143"}
	I0929 12:10:46.787119  400750 main.go:141] libmachine: (test-preload-547438) reserved static IP address 192.168.39.143 for domain test-preload-547438
	I0929 12:10:46.787140  400750 main.go:141] libmachine: (test-preload-547438) waiting for SSH...
	I0929 12:10:46.787156  400750 main.go:141] libmachine: (test-preload-547438) DBG | Getting to WaitForSSH function...
	I0929 12:10:46.789159  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:10:46.789456  400750 main.go:141] libmachine: (test-preload-547438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:8e:81", ip: ""} in network mk-test-preload-547438: {Iface:virbr1 ExpiryTime:2025-09-29 13:09:33 +0000 UTC Type:0 Mac:52:54:00:a8:8e:81 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:test-preload-547438 Clientid:01:52:54:00:a8:8e:81}
	I0929 12:10:46.789486  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined IP address 192.168.39.143 and MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:10:46.789641  400750 main.go:141] libmachine: (test-preload-547438) DBG | Using SSH client type: external
	I0929 12:10:46.789688  400750 main.go:141] libmachine: (test-preload-547438) DBG | Using SSH private key: /home/jenkins/minikube-integration/21655-365455/.minikube/machines/test-preload-547438/id_rsa (-rw-------)
	I0929 12:10:46.789760  400750 main.go:141] libmachine: (test-preload-547438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.143 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21655-365455/.minikube/machines/test-preload-547438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0929 12:10:46.789784  400750 main.go:141] libmachine: (test-preload-547438) DBG | About to run SSH command:
	I0929 12:10:46.789813  400750 main.go:141] libmachine: (test-preload-547438) DBG | exit 0
	I0929 12:10:57.062689  400750 main.go:141] libmachine: (test-preload-547438) DBG | SSH cmd err, output: exit status 255: 
	I0929 12:10:57.062725  400750 main.go:141] libmachine: (test-preload-547438) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0929 12:10:57.062736  400750 main.go:141] libmachine: (test-preload-547438) DBG | command : exit 0
	I0929 12:10:57.062757  400750 main.go:141] libmachine: (test-preload-547438) DBG | err     : exit status 255
	I0929 12:10:57.062768  400750 main.go:141] libmachine: (test-preload-547438) DBG | output  : 
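The first `exit 0` probe fails with status 255 because sshd inside the guest is not up yet; libmachine simply waits and retries (here roughly three seconds later, at 12:11:00). A bash sketch of that wait loop, reusing the key path and SSH options shown above:

    # Keep probing until the guest's sshd accepts a no-op command.
    until ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
              -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
              -i /home/jenkins/minikube-integration/21655-365455/.minikube/machines/test-preload-547438/id_rsa \
              docker@192.168.39.143 'exit 0' 2>/dev/null; do
        sleep 3   # matches the ~3s gap between attempts in the log
    done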
	I0929 12:11:00.064137  400750 main.go:141] libmachine: (test-preload-547438) DBG | Getting to WaitForSSH function...
	I0929 12:11:00.067500  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:00.068029  400750 main.go:141] libmachine: (test-preload-547438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:8e:81", ip: ""} in network mk-test-preload-547438: {Iface:virbr1 ExpiryTime:2025-09-29 13:10:56 +0000 UTC Type:0 Mac:52:54:00:a8:8e:81 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:test-preload-547438 Clientid:01:52:54:00:a8:8e:81}
	I0929 12:11:00.068074  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined IP address 192.168.39.143 and MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:00.068272  400750 main.go:141] libmachine: (test-preload-547438) DBG | Using SSH client type: external
	I0929 12:11:00.068303  400750 main.go:141] libmachine: (test-preload-547438) DBG | Using SSH private key: /home/jenkins/minikube-integration/21655-365455/.minikube/machines/test-preload-547438/id_rsa (-rw-------)
	I0929 12:11:00.068337  400750 main.go:141] libmachine: (test-preload-547438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.143 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21655-365455/.minikube/machines/test-preload-547438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0929 12:11:00.068360  400750 main.go:141] libmachine: (test-preload-547438) DBG | About to run SSH command:
	I0929 12:11:00.068412  400750 main.go:141] libmachine: (test-preload-547438) DBG | exit 0
	I0929 12:11:00.197987  400750 main.go:141] libmachine: (test-preload-547438) DBG | SSH cmd err, output: <nil>: 
	I0929 12:11:00.198441  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetConfigRaw
	I0929 12:11:00.199182  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetIP
	I0929 12:11:00.202042  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:00.202352  400750 main.go:141] libmachine: (test-preload-547438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:8e:81", ip: ""} in network mk-test-preload-547438: {Iface:virbr1 ExpiryTime:2025-09-29 13:10:56 +0000 UTC Type:0 Mac:52:54:00:a8:8e:81 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:test-preload-547438 Clientid:01:52:54:00:a8:8e:81}
	I0929 12:11:00.202386  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined IP address 192.168.39.143 and MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:00.202690  400750 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/test-preload-547438/config.json ...
	I0929 12:11:00.202926  400750 machine.go:93] provisionDockerMachine start ...
	I0929 12:11:00.202948  400750 main.go:141] libmachine: (test-preload-547438) Calling .DriverName
	I0929 12:11:00.203209  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHHostname
	I0929 12:11:00.206032  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:00.206387  400750 main.go:141] libmachine: (test-preload-547438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:8e:81", ip: ""} in network mk-test-preload-547438: {Iface:virbr1 ExpiryTime:2025-09-29 13:10:56 +0000 UTC Type:0 Mac:52:54:00:a8:8e:81 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:test-preload-547438 Clientid:01:52:54:00:a8:8e:81}
	I0929 12:11:00.206420  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined IP address 192.168.39.143 and MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:00.206588  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHPort
	I0929 12:11:00.206798  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHKeyPath
	I0929 12:11:00.206983  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHKeyPath
	I0929 12:11:00.207118  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHUsername
	I0929 12:11:00.207311  400750 main.go:141] libmachine: Using SSH client type: native
	I0929 12:11:00.207635  400750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I0929 12:11:00.207654  400750 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 12:11:00.312560  400750 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0929 12:11:00.312595  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetMachineName
	I0929 12:11:00.312866  400750 buildroot.go:166] provisioning hostname "test-preload-547438"
	I0929 12:11:00.312895  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetMachineName
	I0929 12:11:00.313112  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHHostname
	I0929 12:11:00.315945  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:00.316301  400750 main.go:141] libmachine: (test-preload-547438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:8e:81", ip: ""} in network mk-test-preload-547438: {Iface:virbr1 ExpiryTime:2025-09-29 13:10:56 +0000 UTC Type:0 Mac:52:54:00:a8:8e:81 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:test-preload-547438 Clientid:01:52:54:00:a8:8e:81}
	I0929 12:11:00.316332  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined IP address 192.168.39.143 and MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:00.316453  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHPort
	I0929 12:11:00.316688  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHKeyPath
	I0929 12:11:00.316858  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHKeyPath
	I0929 12:11:00.317078  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHUsername
	I0929 12:11:00.317264  400750 main.go:141] libmachine: Using SSH client type: native
	I0929 12:11:00.317464  400750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I0929 12:11:00.317476  400750 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-547438 && echo "test-preload-547438" | sudo tee /etc/hostname
	I0929 12:11:00.443163  400750 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-547438
	
	I0929 12:11:00.443197  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHHostname
	I0929 12:11:00.446773  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:00.447247  400750 main.go:141] libmachine: (test-preload-547438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:8e:81", ip: ""} in network mk-test-preload-547438: {Iface:virbr1 ExpiryTime:2025-09-29 13:10:56 +0000 UTC Type:0 Mac:52:54:00:a8:8e:81 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:test-preload-547438 Clientid:01:52:54:00:a8:8e:81}
	I0929 12:11:00.447277  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined IP address 192.168.39.143 and MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:00.447482  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHPort
	I0929 12:11:00.447742  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHKeyPath
	I0929 12:11:00.447919  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHKeyPath
	I0929 12:11:00.448083  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHUsername
	I0929 12:11:00.448246  400750 main.go:141] libmachine: Using SSH client type: native
	I0929 12:11:00.448444  400750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I0929 12:11:00.448466  400750 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-547438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-547438/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-547438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 12:11:00.567563  400750 main.go:141] libmachine: SSH cmd err, output: <nil>: 
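The script above is how the provisioner pins the machine name in the guest's /etc/hosts: if no line already carries the hostname, it either rewrites an existing 127.0.1.1 entry or appends a new one. A standalone, lightly simplified copy for reference (hostname parameterized; the original uses anchored `grep -xq` patterns):

    HOSTNAME=test-preload-547438
    # Do nothing if some line already maps to the hostname.
    if ! grep -q "[[:space:]]${HOSTNAME}\$" /etc/hosts; then
        if grep -q '^127.0.1.1[[:space:]]' /etc/hosts; then
            # Rewrite the existing 127.0.1.1 entry in place.
            sudo sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 ${HOSTNAME}/" /etc/hosts
        else
            # No 127.0.1.1 line yet: append one.
            echo "127.0.1.1 ${HOSTNAME}" | sudo tee -a /etc/hosts
        fi
    fi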
	I0929 12:11:00.567604  400750 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21655-365455/.minikube CaCertPath:/home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21655-365455/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21655-365455/.minikube}
	I0929 12:11:00.567634  400750 buildroot.go:174] setting up certificates
	I0929 12:11:00.567648  400750 provision.go:84] configureAuth start
	I0929 12:11:00.567665  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetMachineName
	I0929 12:11:00.568016  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetIP
	I0929 12:11:00.571782  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:00.572188  400750 main.go:141] libmachine: (test-preload-547438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:8e:81", ip: ""} in network mk-test-preload-547438: {Iface:virbr1 ExpiryTime:2025-09-29 13:10:56 +0000 UTC Type:0 Mac:52:54:00:a8:8e:81 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:test-preload-547438 Clientid:01:52:54:00:a8:8e:81}
	I0929 12:11:00.572223  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined IP address 192.168.39.143 and MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:00.572458  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHHostname
	I0929 12:11:00.575139  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:00.575587  400750 main.go:141] libmachine: (test-preload-547438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:8e:81", ip: ""} in network mk-test-preload-547438: {Iface:virbr1 ExpiryTime:2025-09-29 13:10:56 +0000 UTC Type:0 Mac:52:54:00:a8:8e:81 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:test-preload-547438 Clientid:01:52:54:00:a8:8e:81}
	I0929 12:11:00.575617  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined IP address 192.168.39.143 and MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:00.575749  400750 provision.go:143] copyHostCerts
	I0929 12:11:00.575816  400750 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-365455/.minikube/ca.pem, removing ...
	I0929 12:11:00.575857  400750 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-365455/.minikube/ca.pem
	I0929 12:11:00.575954  400750 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21655-365455/.minikube/ca.pem (1078 bytes)
	I0929 12:11:00.576091  400750 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-365455/.minikube/cert.pem, removing ...
	I0929 12:11:00.576105  400750 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-365455/.minikube/cert.pem
	I0929 12:11:00.576149  400750 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21655-365455/.minikube/cert.pem (1123 bytes)
	I0929 12:11:00.576233  400750 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-365455/.minikube/key.pem, removing ...
	I0929 12:11:00.576244  400750 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-365455/.minikube/key.pem
	I0929 12:11:00.576282  400750 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21655-365455/.minikube/key.pem (1675 bytes)
	I0929 12:11:00.576364  400750 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21655-365455/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca-key.pem org=jenkins.test-preload-547438 san=[127.0.0.1 192.168.39.143 localhost minikube test-preload-547438]
	I0929 12:11:00.833524  400750 provision.go:177] copyRemoteCerts
	I0929 12:11:00.833608  400750 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 12:11:00.833637  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHHostname
	I0929 12:11:00.836840  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:00.837341  400750 main.go:141] libmachine: (test-preload-547438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:8e:81", ip: ""} in network mk-test-preload-547438: {Iface:virbr1 ExpiryTime:2025-09-29 13:10:56 +0000 UTC Type:0 Mac:52:54:00:a8:8e:81 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:test-preload-547438 Clientid:01:52:54:00:a8:8e:81}
	I0929 12:11:00.837366  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined IP address 192.168.39.143 and MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:00.837563  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHPort
	I0929 12:11:00.837784  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHKeyPath
	I0929 12:11:00.838006  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHUsername
	I0929 12:11:00.838153  400750 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/test-preload-547438/id_rsa Username:docker}
	I0929 12:11:00.921574  400750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 12:11:00.950648  400750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0929 12:11:00.980266  400750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 12:11:01.009804  400750 provision.go:87] duration metric: took 442.135093ms to configureAuth
	I0929 12:11:01.009850  400750 buildroot.go:189] setting minikube options for container-runtime
	I0929 12:11:01.010075  400750 config.go:182] Loaded profile config "test-preload-547438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0929 12:11:01.010179  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHHostname
	I0929 12:11:01.013614  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:01.014049  400750 main.go:141] libmachine: (test-preload-547438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:8e:81", ip: ""} in network mk-test-preload-547438: {Iface:virbr1 ExpiryTime:2025-09-29 13:10:56 +0000 UTC Type:0 Mac:52:54:00:a8:8e:81 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:test-preload-547438 Clientid:01:52:54:00:a8:8e:81}
	I0929 12:11:01.014088  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined IP address 192.168.39.143 and MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:01.014267  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHPort
	I0929 12:11:01.014500  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHKeyPath
	I0929 12:11:01.014686  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHKeyPath
	I0929 12:11:01.014862  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHUsername
	I0929 12:11:01.015036  400750 main.go:141] libmachine: Using SSH client type: native
	I0929 12:11:01.015317  400750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I0929 12:11:01.015340  400750 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 12:11:01.263220  400750 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 12:11:01.263259  400750 machine.go:96] duration metric: took 1.060316085s to provisionDockerMachine
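The last provisioning step above writes a one-line environment file for CRI-O and restarts the service; 10.96.0.0/12 is the cluster's service CIDR (see the config dump later in the log), so in-cluster registries are treated as insecure. To confirm the file landed and the daemon came back, on the guest (assuming the CRI-O unit on the minikube ISO reads this file as an EnvironmentFile):

    # The env file written by the provisioner (content echoed back in the log above).
    cat /etc/sysconfig/crio.minikube
    # Verify CRI-O survived the restart.
    sudo systemctl is-active crio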
	I0929 12:11:01.263276  400750 start.go:293] postStartSetup for "test-preload-547438" (driver="kvm2")
	I0929 12:11:01.263291  400750 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 12:11:01.263319  400750 main.go:141] libmachine: (test-preload-547438) Calling .DriverName
	I0929 12:11:01.263659  400750 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 12:11:01.263693  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHHostname
	I0929 12:11:01.266666  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:01.267125  400750 main.go:141] libmachine: (test-preload-547438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:8e:81", ip: ""} in network mk-test-preload-547438: {Iface:virbr1 ExpiryTime:2025-09-29 13:10:56 +0000 UTC Type:0 Mac:52:54:00:a8:8e:81 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:test-preload-547438 Clientid:01:52:54:00:a8:8e:81}
	I0929 12:11:01.267154  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined IP address 192.168.39.143 and MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:01.267358  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHPort
	I0929 12:11:01.267535  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHKeyPath
	I0929 12:11:01.267734  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHUsername
	I0929 12:11:01.267870  400750 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/test-preload-547438/id_rsa Username:docker}
	I0929 12:11:01.353319  400750 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 12:11:01.358399  400750 info.go:137] Remote host: Buildroot 2025.02
	I0929 12:11:01.358431  400750 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-365455/.minikube/addons for local assets ...
	I0929 12:11:01.358536  400750 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-365455/.minikube/files for local assets ...
	I0929 12:11:01.358641  400750 filesync.go:149] local asset: /home/jenkins/minikube-integration/21655-365455/.minikube/files/etc/ssl/certs/3694232.pem -> 3694232.pem in /etc/ssl/certs
	I0929 12:11:01.358766  400750 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 12:11:01.370764  400750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/files/etc/ssl/certs/3694232.pem --> /etc/ssl/certs/3694232.pem (1708 bytes)
	I0929 12:11:01.403493  400750 start.go:296] duration metric: took 140.197717ms for postStartSetup
	I0929 12:11:01.403545  400750 fix.go:56] duration metric: took 16.022597556s for fixHost
	I0929 12:11:01.403574  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHHostname
	I0929 12:11:01.406760  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:01.407130  400750 main.go:141] libmachine: (test-preload-547438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:8e:81", ip: ""} in network mk-test-preload-547438: {Iface:virbr1 ExpiryTime:2025-09-29 13:10:56 +0000 UTC Type:0 Mac:52:54:00:a8:8e:81 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:test-preload-547438 Clientid:01:52:54:00:a8:8e:81}
	I0929 12:11:01.407164  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined IP address 192.168.39.143 and MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:01.407330  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHPort
	I0929 12:11:01.407582  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHKeyPath
	I0929 12:11:01.407811  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHKeyPath
	I0929 12:11:01.407997  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHUsername
	I0929 12:11:01.408151  400750 main.go:141] libmachine: Using SSH client type: native
	I0929 12:11:01.408365  400750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I0929 12:11:01.408378  400750 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0929 12:11:01.512175  400750 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759147861.468425460
	
	I0929 12:11:01.512203  400750 fix.go:216] guest clock: 1759147861.468425460
	I0929 12:11:01.512214  400750 fix.go:229] Guest: 2025-09-29 12:11:01.46842546 +0000 UTC Remote: 2025-09-29 12:11:01.403550795 +0000 UTC m=+26.412074593 (delta=64.874665ms)
	I0929 12:11:01.512241  400750 fix.go:200] guest clock delta is within tolerance: 64.874665ms
	I0929 12:11:01.512248  400750 start.go:83] releasing machines lock for "test-preload-547438", held for 16.13132025s
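The fix.go lines above implement a clock-skew check: minikube runs `date +%s.%N` over SSH, compares the result with the host clock, and only forces a resync when the delta leaves tolerance (the 64.9ms here was within bounds). A rough sketch of the same comparison by hand:

    # Compare guest and host clocks; a large delta would call for a time resync.
    GUEST=$(ssh docker@192.168.39.143 date +%s.%N)
    HOST=$(date +%s.%N)
    echo "guest - host = $(echo "$GUEST - $HOST" | bc)s"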
	I0929 12:11:01.512274  400750 main.go:141] libmachine: (test-preload-547438) Calling .DriverName
	I0929 12:11:01.512552  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetIP
	I0929 12:11:01.515809  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:01.516204  400750 main.go:141] libmachine: (test-preload-547438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:8e:81", ip: ""} in network mk-test-preload-547438: {Iface:virbr1 ExpiryTime:2025-09-29 13:10:56 +0000 UTC Type:0 Mac:52:54:00:a8:8e:81 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:test-preload-547438 Clientid:01:52:54:00:a8:8e:81}
	I0929 12:11:01.516233  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined IP address 192.168.39.143 and MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:01.516421  400750 main.go:141] libmachine: (test-preload-547438) Calling .DriverName
	I0929 12:11:01.517041  400750 main.go:141] libmachine: (test-preload-547438) Calling .DriverName
	I0929 12:11:01.517237  400750 main.go:141] libmachine: (test-preload-547438) Calling .DriverName
	I0929 12:11:01.517355  400750 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 12:11:01.517419  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHHostname
	I0929 12:11:01.517434  400750 ssh_runner.go:195] Run: cat /version.json
	I0929 12:11:01.517461  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHHostname
	I0929 12:11:01.520600  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:01.520673  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:01.521059  400750 main.go:141] libmachine: (test-preload-547438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:8e:81", ip: ""} in network mk-test-preload-547438: {Iface:virbr1 ExpiryTime:2025-09-29 13:10:56 +0000 UTC Type:0 Mac:52:54:00:a8:8e:81 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:test-preload-547438 Clientid:01:52:54:00:a8:8e:81}
	I0929 12:11:01.521088  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined IP address 192.168.39.143 and MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:01.521113  400750 main.go:141] libmachine: (test-preload-547438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:8e:81", ip: ""} in network mk-test-preload-547438: {Iface:virbr1 ExpiryTime:2025-09-29 13:10:56 +0000 UTC Type:0 Mac:52:54:00:a8:8e:81 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:test-preload-547438 Clientid:01:52:54:00:a8:8e:81}
	I0929 12:11:01.521135  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined IP address 192.168.39.143 and MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:01.521244  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHPort
	I0929 12:11:01.521341  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHPort
	I0929 12:11:01.521434  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHKeyPath
	I0929 12:11:01.521525  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHKeyPath
	I0929 12:11:01.521608  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHUsername
	I0929 12:11:01.521700  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHUsername
	I0929 12:11:01.521757  400750 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/test-preload-547438/id_rsa Username:docker}
	I0929 12:11:01.521873  400750 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/test-preload-547438/id_rsa Username:docker}
	I0929 12:11:01.598612  400750 ssh_runner.go:195] Run: systemctl --version
	I0929 12:11:01.639520  400750 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 12:11:01.787387  400750 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0929 12:11:01.794315  400750 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0929 12:11:01.794381  400750 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 12:11:01.814258  400750 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0929 12:11:01.814294  400750 start.go:495] detecting cgroup driver to use...
	I0929 12:11:01.814377  400750 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 12:11:01.832734  400750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 12:11:01.849689  400750 docker.go:218] disabling cri-docker service (if available) ...
	I0929 12:11:01.849771  400750 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 12:11:01.866690  400750 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 12:11:01.882932  400750 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 12:11:02.029549  400750 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 12:11:02.250111  400750 docker.go:234] disabling docker service ...
	I0929 12:11:02.250193  400750 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 12:11:02.266636  400750 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 12:11:02.282094  400750 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 12:11:02.441473  400750 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 12:11:02.586380  400750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 12:11:02.601529  400750 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 12:11:02.623989  400750 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0929 12:11:02.624058  400750 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:11:02.636698  400750 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0929 12:11:02.636772  400750 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:11:02.649020  400750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:11:02.661276  400750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:11:02.673760  400750 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 12:11:02.686789  400750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:11:02.698745  400750 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:11:02.718359  400750 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
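The run of sed commands above is minikube's in-place edit of CRI-O's drop-in config: the pause image is pinned, cgroupfs becomes the cgroup manager, conmon is moved into the "pod" cgroup, and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. The two central edits, condensed (same file and keys, taken verbatim from the log):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # Pin the pause image CRI-O uses for pod sandboxes.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
    # Match kubelet's cgroup driver (cgroupfs on this ISO).
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"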
	I0929 12:11:02.730471  400750 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 12:11:02.740900  400750 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0929 12:11:02.740995  400750 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0929 12:11:02.759785  400750 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
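The failed sysctl above is expected on a fresh boot: /proc/sys/net/bridge/ only exists once br_netfilter is loaded, so minikube falls back to modprobe and then enables IPv4 forwarding. The same fallback as a compact pattern:

    # Load br_netfilter only when the bridge sysctl tree is missing.
    sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1 \
        || sudo modprobe br_netfilter
    # Pod-to-pod traffic needs IPv4 forwarding on the node.
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'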
	I0929 12:11:02.771507  400750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:11:02.915340  400750 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 12:11:03.025871  400750 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 12:11:03.025956  400750 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 12:11:03.031237  400750 start.go:563] Will wait 60s for crictl version
	I0929 12:11:03.031306  400750 ssh_runner.go:195] Run: which crictl
	I0929 12:11:03.035268  400750 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 12:11:03.079606  400750 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0929 12:11:03.079684  400750 ssh_runner.go:195] Run: crio --version
	I0929 12:11:03.108145  400750 ssh_runner.go:195] Run: crio --version
	I0929 12:11:03.139018  400750 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0929 12:11:03.140319  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetIP
	I0929 12:11:03.143320  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:03.143757  400750 main.go:141] libmachine: (test-preload-547438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:8e:81", ip: ""} in network mk-test-preload-547438: {Iface:virbr1 ExpiryTime:2025-09-29 13:10:56 +0000 UTC Type:0 Mac:52:54:00:a8:8e:81 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:test-preload-547438 Clientid:01:52:54:00:a8:8e:81}
	I0929 12:11:03.143794  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined IP address 192.168.39.143 and MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:03.144057  400750 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0929 12:11:03.148527  400750 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 12:11:03.163459  400750 kubeadm.go:875] updating cluster {Name:test-preload-547438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-547438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.143 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0929 12:11:03.163580  400750 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0929 12:11:03.163629  400750 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 12:11:03.205046  400750 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0929 12:11:03.205127  400750 ssh_runner.go:195] Run: which lz4
	I0929 12:11:03.209519  400750 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0929 12:11:03.214691  400750 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0929 12:11:03.214735  400750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0929 12:11:04.660436  400750 crio.go:462] duration metric: took 1.450957829s to copy over tarball
	I0929 12:11:04.660537  400750 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0929 12:11:06.323752  400750 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.663168801s)
	I0929 12:11:06.323790  400750 crio.go:469] duration metric: took 1.663320296s to extract the tarball
	I0929 12:11:06.323799  400750 ssh_runner.go:146] rm: /preloaded.tar.lz4
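The sequence above is the preload fast path: the ~400MB tarball of container images is copied to the guest, unpacked into /var with extended attributes preserved (so file capabilities survive), then deleted. The extraction step, verbatim from the log, for reproducing by hand on the guest:

    # Unpack the preloaded image tarball into /var, keeping security xattrs
    # (file capabilities) intact; lz4 must be present on the guest.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4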
	I0929 12:11:06.364546  400750 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 12:11:06.404133  400750 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 12:11:06.404158  400750 cache_images.go:85] Images are preloaded, skipping loading
	I0929 12:11:06.404166  400750 kubeadm.go:926] updating node { 192.168.39.143 8443 v1.32.0 crio true true} ...
	I0929 12:11:06.404333  400750 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-547438 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.143
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-547438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 12:11:06.404440  400750 ssh_runner.go:195] Run: crio config
	I0929 12:11:06.448560  400750 cni.go:84] Creating CNI manager for ""
	I0929 12:11:06.448585  400750 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 12:11:06.448600  400750 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 12:11:06.448629  400750 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.143 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-547438 NodeName:test-preload-547438 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.143"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.143 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 12:11:06.448778  400750 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.143
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-547438"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.143"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.143"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
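This multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Recent kubeadm releases can sanity-check such a file before it is used; a sketch, assuming the validate subcommand is available in the pinned v1.32.0 binary:

    # Validate the generated config with the pinned kubeadm binary (path from the log).
    sudo /var/lib/minikube/binaries/v1.32.0/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new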
	
	I0929 12:11:06.448876  400750 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0929 12:11:06.460419  400750 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 12:11:06.460498  400750 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 12:11:06.471878  400750 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0929 12:11:06.491289  400750 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 12:11:06.510587  400750 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I0929 12:11:06.530245  400750 ssh_runner.go:195] Run: grep 192.168.39.143	control-plane.minikube.internal$ /etc/hosts
	I0929 12:11:06.534192  400750 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.143	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 12:11:06.547985  400750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:11:06.684861  400750 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:11:06.709124  400750 certs.go:68] Setting up /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/test-preload-547438 for IP: 192.168.39.143
	I0929 12:11:06.709155  400750 certs.go:194] generating shared ca certs ...
	I0929 12:11:06.709178  400750 certs.go:226] acquiring lock for ca certs: {Name:mk0b410c7c5424a4463d6cf6464227ce4eef65e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:11:06.709361  400750 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21655-365455/.minikube/ca.key
	I0929 12:11:06.709430  400750 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21655-365455/.minikube/proxy-client-ca.key
	I0929 12:11:06.709446  400750 certs.go:256] generating profile certs ...
	I0929 12:11:06.709552  400750 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/test-preload-547438/client.key
	I0929 12:11:06.709634  400750 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/test-preload-547438/apiserver.key.bc72b00e
	I0929 12:11:06.709698  400750 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/test-preload-547438/proxy-client.key
	I0929 12:11:06.709840  400750 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/369423.pem (1338 bytes)
	W0929 12:11:06.709882  400750 certs.go:480] ignoring /home/jenkins/minikube-integration/21655-365455/.minikube/certs/369423_empty.pem, impossibly tiny 0 bytes
	I0929 12:11:06.709897  400750 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 12:11:06.709926  400750 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca.pem (1078 bytes)
	I0929 12:11:06.709958  400750 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/cert.pem (1123 bytes)
	I0929 12:11:06.710008  400750 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/key.pem (1675 bytes)
	I0929 12:11:06.710062  400750 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/files/etc/ssl/certs/3694232.pem (1708 bytes)
	I0929 12:11:06.710885  400750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 12:11:06.744631  400750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 12:11:06.774809  400750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 12:11:06.808167  400750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 12:11:06.835435  400750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/test-preload-547438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0929 12:11:06.863574  400750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/test-preload-547438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 12:11:06.892098  400750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/test-preload-547438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 12:11:06.920633  400750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/test-preload-547438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 12:11:06.949469  400750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/certs/369423.pem --> /usr/share/ca-certificates/369423.pem (1338 bytes)
	I0929 12:11:06.979299  400750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/files/etc/ssl/certs/3694232.pem --> /usr/share/ca-certificates/3694232.pem (1708 bytes)
	I0929 12:11:07.007808  400750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 12:11:07.036842  400750 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 12:11:07.056850  400750 ssh_runner.go:195] Run: openssl version
	I0929 12:11:07.063057  400750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3694232.pem && ln -fs /usr/share/ca-certificates/3694232.pem /etc/ssl/certs/3694232.pem"
	I0929 12:11:07.075662  400750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3694232.pem
	I0929 12:11:07.080824  400750 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 11:26 /usr/share/ca-certificates/3694232.pem
	I0929 12:11:07.080880  400750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3694232.pem
	I0929 12:11:07.088370  400750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3694232.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 12:11:07.101219  400750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 12:11:07.114027  400750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:11:07.119351  400750 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 11:16 /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:11:07.119410  400750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:11:07.126470  400750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 12:11:07.139037  400750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/369423.pem && ln -fs /usr/share/ca-certificates/369423.pem /etc/ssl/certs/369423.pem"
	I0929 12:11:07.151683  400750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/369423.pem
	I0929 12:11:07.156855  400750 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 11:26 /usr/share/ca-certificates/369423.pem
	I0929 12:11:07.156922  400750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/369423.pem
	I0929 12:11:07.164063  400750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/369423.pem /etc/ssl/certs/51391683.0"
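The three test/ln/ls/openssl rounds above implement OpenSSL's hashed-symlink convention for trust stores: a certificate is looked up by the name <subject-hash>.0, so each PEM gets a symlink named after its `openssl x509 -hash` output. For one certificate (the hash b5213941 matches the minikubeCA link created above):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 in this run
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"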
	I0929 12:11:07.177018  400750 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 12:11:07.182277  400750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 12:11:07.189672  400750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 12:11:07.196860  400750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 12:11:07.203958  400750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 12:11:07.211121  400750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 12:11:07.218436  400750 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
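Before attempting the in-place cluster restart, every control-plane certificate is checked with `-checkend 86400`, which exits non-zero if the certificate expires within the next 86400 seconds (24h). For example:

    # Exit status 0 only if the cert is still valid 24h from now.
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
        && echo "cert valid for at least another day"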
	I0929 12:11:07.225374  400750 kubeadm.go:392] StartCluster: {Name:test-preload-547438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-547438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.143 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:11:07.225469  400750 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 12:11:07.225549  400750 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 12:11:07.265269  400750 cri.go:89] found id: ""
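
Note: the empty found id result above means the runtime currently knows of no kube-system containers, so the restart path proceeds from the on-disk configuration instead of stopping anything. A sketch of the same label-filtered listing, shelling out to crictl the way the ssh_runner call does (sudo and the SSH transport omitted):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // List all container IDs (running or not) whose pod namespace label
        // is kube-system; an empty result matches `found id: ""` above.
        out, err := exec.Command("crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            panic(err)
        }
        ids := strings.Fields(string(out))
        fmt.Printf("found %d kube-system containers\n", len(ids))
    }
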
	I0929 12:11:07.265350  400750 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 12:11:07.277867  400750 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 12:11:07.277891  400750 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 12:11:07.277946  400750 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 12:11:07.289805  400750 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 12:11:07.290338  400750 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-547438" does not appear in /home/jenkins/minikube-integration/21655-365455/kubeconfig
	I0929 12:11:07.290477  400750 kubeconfig.go:62] /home/jenkins/minikube-integration/21655-365455/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-547438" cluster setting kubeconfig missing "test-preload-547438" context setting]
	I0929 12:11:07.290835  400750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/kubeconfig: {Name:mkd302531ec3362506563544f43831c9980ac365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
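
Note: the "needs updating (will repair)" decision above reduces to checking whether the cluster and context entries exist in the kubeconfig file. A minimal sketch assuming k8s.io/client-go (path and profile name from the log; the repair itself is omitted):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/21655-365455/kubeconfig")
        if err != nil {
            panic(err)
        }
        name := "test-preload-547438"
        _, hasCluster := cfg.Clusters[name]
        _, hasContext := cfg.Contexts[name]
        if !hasCluster || !hasContext {
            fmt.Println("kubeconfig needs updating (will repair)")
        }
    }
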
	I0929 12:11:07.291523  400750 kapi.go:59] client config for test-preload-547438: &rest.Config{Host:"https://192.168.39.143:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21655-365455/.minikube/profiles/test-preload-547438/client.crt", KeyFile:"/home/jenkins/minikube-integration/21655-365455/.minikube/profiles/test-preload-547438/client.key", CAFile:"/home/jenkins/minikube-integration/21655-365455/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0929 12:11:07.292081  400750 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0929 12:11:07.292102  400750 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0929 12:11:07.292111  400750 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0929 12:11:07.292118  400750 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0929 12:11:07.292127  400750 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0929 12:11:07.292589  400750 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 12:11:07.303981  400750 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.39.143
	I0929 12:11:07.304022  400750 kubeadm.go:1152] stopping kube-system containers ...
	I0929 12:11:07.304035  400750 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0929 12:11:07.304090  400750 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 12:11:07.342935  400750 cri.go:89] found id: ""
	I0929 12:11:07.343023  400750 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0929 12:11:07.361600  400750 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 12:11:07.373176  400750 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 12:11:07.373198  400750 kubeadm.go:157] found existing configuration files:
	
	I0929 12:11:07.373249  400750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 12:11:07.384039  400750 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 12:11:07.384114  400750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 12:11:07.395587  400750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 12:11:07.406100  400750 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 12:11:07.406158  400750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 12:11:07.417193  400750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 12:11:07.427390  400750 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 12:11:07.427448  400750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 12:11:07.438104  400750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 12:11:07.448453  400750 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 12:11:07.448512  400750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
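
Note: the four grep/rm pairs above implement one cleanup rule: keep a kubeadm-managed kubeconfig only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so the following init phases regenerate it. A compact Go sketch of the loop (local exec, sudo plumbing as in the log):

    package main

    import "os/exec"

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint (or the whole file) is
            // missing; in that case the stale file is removed and later
            // regenerated by "kubeadm init phase kubeconfig".
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                _ = exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }
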
	I0929 12:11:07.459303  400750 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 12:11:07.470487  400750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 12:11:07.525752  400750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 12:11:08.439020  400750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0929 12:11:08.692367  400750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 12:11:08.753394  400750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
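
Note: rather than a full "kubeadm init", the restart path replays individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, and local etcd. A sketch of the same sequence (binary path and config file from the log; the env/PATH wrapping and sudo are simplified away):

    package main

    import "os/exec"

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("/var/lib/minikube/binaries/v1.32.0/kubeadm", args...)
            if out, err := cmd.CombinedOutput(); err != nil {
                panic(string(out)) // abort the restart on the first failed phase
            }
        }
    }
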
	I0929 12:11:08.840654  400750 api_server.go:52] waiting for apiserver process to appear ...
	I0929 12:11:08.840754  400750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:11:09.341544  400750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:11:09.841496  400750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:11:10.341493  400750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:11:10.841287  400750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:11:11.341117  400750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:11:11.366548  400750 api_server.go:72] duration metric: took 2.52589223s to wait for apiserver process to appear ...
	I0929 12:11:11.366588  400750 api_server.go:88] waiting for apiserver healthz status ...
	I0929 12:11:11.366616  400750 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I0929 12:11:11.367195  400750 api_server.go:269] stopped: https://192.168.39.143:8443/healthz: Get "https://192.168.39.143:8443/healthz": dial tcp 192.168.39.143:8443: connect: connection refused
	I0929 12:11:11.866875  400750 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I0929 12:11:14.175617  400750 api_server.go:279] https://192.168.39.143:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0929 12:11:14.175662  400750 api_server.go:103] status: https://192.168.39.143:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0929 12:11:14.175684  400750 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I0929 12:11:14.181920  400750 api_server.go:279] https://192.168.39.143:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0929 12:11:14.181954  400750 api_server.go:103] status: https://192.168.39.143:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0929 12:11:14.367207  400750 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I0929 12:11:14.373048  400750 api_server.go:279] https://192.168.39.143:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:11:14.373082  400750 api_server.go:103] status: https://192.168.39.143:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:11:14.866704  400750 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I0929 12:11:14.875050  400750 api_server.go:279] https://192.168.39.143:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:11:14.875091  400750 api_server.go:103] status: https://192.168.39.143:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:11:15.366947  400750 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I0929 12:11:15.375649  400750 api_server.go:279] https://192.168.39.143:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:11:15.375684  400750 api_server.go:103] status: https://192.168.39.143:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:11:15.866947  400750 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I0929 12:11:15.871395  400750 api_server.go:279] https://192.168.39.143:8443/healthz returned 200:
	ok
	I0929 12:11:15.878421  400750 api_server.go:141] control plane version: v1.32.0
	I0929 12:11:15.878447  400750 api_server.go:131] duration metric: took 4.511850579s to wait for apiserver health ...
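
Note: the healthz progression above is typical for a cold apiserver: connection refused while the process binds, then 403 because anonymous access to /healthz is rejected until the RBAC bootstrap roles exist, then 500 while the remaining post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, and finally 200. A minimal polling sketch, assuming the self-signed serving certificate is skipped rather than verified:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        for deadline := time.Now().Add(2 * time.Minute); time.Now().Before(deadline); {
            resp, err := client.Get("https://192.168.39.143:8443/healthz")
            if err == nil {
                code := resp.StatusCode
                resp.Body.Close()
                if code == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                // 403: anonymous /healthz blocked until RBAC bootstrap completes.
                // 500: one or more post-start hooks still pending.
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for /healthz")
    }
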
	I0929 12:11:15.878456  400750 cni.go:84] Creating CNI manager for ""
	I0929 12:11:15.878463  400750 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 12:11:15.880020  400750 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0929 12:11:15.881100  400750 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0929 12:11:15.893823  400750 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
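
Note: the log records only that a 496-byte conflist was copied to /etc/cni/net.d/1-k8s.conflist, not its contents. For illustration only, a generic bridge-plus-portmap conflist has roughly this shape (the subnet and field values here are assumptions, not the file minikube actually wrote):

    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
    `

    func main() {
        // Write the bridge CNI config where the kubelet/CRI-O will pick it up.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
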
	I0929 12:11:15.928403  400750 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 12:11:15.939371  400750 system_pods.go:59] 7 kube-system pods found
	I0929 12:11:15.939423  400750 system_pods.go:61] "coredns-668d6bf9bc-k5kwv" [9c3d9f78-43d9-47b0-a5b2-5de5eb6abc78] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:11:15.939434  400750 system_pods.go:61] "etcd-test-preload-547438" [d79f04de-f6ab-4e8a-85db-d55458799546] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:11:15.939454  400750 system_pods.go:61] "kube-apiserver-test-preload-547438" [9595db86-04d0-41c9-8445-47d15cdcabe5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:11:15.939466  400750 system_pods.go:61] "kube-controller-manager-test-preload-547438" [99cf4c5f-8f66-4357-91aa-82a768422eba] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:11:15.939476  400750 system_pods.go:61] "kube-proxy-f429m" [9d0d823b-ec7d-4696-8d96-9778671de9e7] Running
	I0929 12:11:15.939488  400750 system_pods.go:61] "kube-scheduler-test-preload-547438" [68c02c2f-59cf-4ed9-9c60-5af20e07fe0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:11:15.939496  400750 system_pods.go:61] "storage-provisioner" [1e6c991c-05a6-4983-86c3-d04c9cacf015] Running
	I0929 12:11:15.939507  400750 system_pods.go:74] duration metric: took 11.070892ms to wait for pod list to return data ...
	I0929 12:11:15.939520  400750 node_conditions.go:102] verifying NodePressure condition ...
	I0929 12:11:15.944351  400750 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0929 12:11:15.944378  400750 node_conditions.go:123] node cpu capacity is 2
	I0929 12:11:15.944396  400750 node_conditions.go:105] duration metric: took 4.866439ms to run NodePressure ...
	I0929 12:11:15.944424  400750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0929 12:11:16.206430  400750 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0929 12:11:16.211998  400750 kubeadm.go:735] kubelet initialised
	I0929 12:11:16.212026  400750 kubeadm.go:736] duration metric: took 5.562024ms waiting for restarted kubelet to initialise ...
	I0929 12:11:16.212045  400750 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 12:11:16.233222  400750 ops.go:34] apiserver oom_adj: -16
	I0929 12:11:16.233247  400750 kubeadm.go:593] duration metric: took 8.955349956s to restartPrimaryControlPlane
	I0929 12:11:16.233256  400750 kubeadm.go:394] duration metric: took 9.007889994s to StartCluster
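
Note: the oom_adj probe a few lines up reads /proc/<pid>/oom_adj for the apiserver; -16 (on the legacy -17..15 scale, where lower is safer) tells the kernel OOM killer to strongly avoid killing it. A local sketch of the same probe (pgrep pattern simplified from the one in the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Find the newest exact-match kube-apiserver process.
        out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.TrimSpace(string(out))
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s", adj) // -16 in the run above
    }
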
	I0929 12:11:16.233275  400750 settings.go:142] acquiring lock: {Name:mk1143e9344364f35458338f5354c9162487b91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:11:16.233378  400750 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21655-365455/kubeconfig
	I0929 12:11:16.234132  400750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/kubeconfig: {Name:mkd302531ec3362506563544f43831c9980ac365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:11:16.234409  400750 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.143 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 12:11:16.234537  400750 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 12:11:16.234637  400750 addons.go:69] Setting storage-provisioner=true in profile "test-preload-547438"
	I0929 12:11:16.234670  400750 addons.go:238] Setting addon storage-provisioner=true in "test-preload-547438"
	W0929 12:11:16.234683  400750 addons.go:247] addon storage-provisioner should already be in state true
	I0929 12:11:16.234679  400750 config.go:182] Loaded profile config "test-preload-547438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0929 12:11:16.234714  400750 host.go:66] Checking if "test-preload-547438" exists ...
	I0929 12:11:16.234717  400750 addons.go:69] Setting default-storageclass=true in profile "test-preload-547438"
	I0929 12:11:16.234756  400750 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-547438"
	I0929 12:11:16.235215  400750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 12:11:16.235269  400750 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:11:16.235217  400750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 12:11:16.235369  400750 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:11:16.236019  400750 out.go:179] * Verifying Kubernetes components...
	I0929 12:11:16.237568  400750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:11:16.249098  400750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43537
	I0929 12:11:16.249422  400750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36593
	I0929 12:11:16.249707  400750 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:11:16.250028  400750 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:11:16.250306  400750 main.go:141] libmachine: Using API Version  1
	I0929 12:11:16.250333  400750 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:11:16.250570  400750 main.go:141] libmachine: Using API Version  1
	I0929 12:11:16.250593  400750 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:11:16.250707  400750 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:11:16.250956  400750 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:11:16.251192  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetState
	I0929 12:11:16.251283  400750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 12:11:16.251332  400750 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:11:16.253659  400750 kapi.go:59] client config for test-preload-547438: &rest.Config{Host:"https://192.168.39.143:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21655-365455/.minikube/profiles/test-preload-547438/client.crt", KeyFile:"/home/jenkins/minikube-integration/21655-365455/.minikube/profiles/test-preload-547438/client.key", CAFile:"/home/jenkins/minikube-integration/21655-365455/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0929 12:11:16.254076  400750 addons.go:238] Setting addon default-storageclass=true in "test-preload-547438"
	W0929 12:11:16.254103  400750 addons.go:247] addon default-storageclass should already be in state true
	I0929 12:11:16.254134  400750 host.go:66] Checking if "test-preload-547438" exists ...
	I0929 12:11:16.254494  400750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 12:11:16.254542  400750 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:11:16.265355  400750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38017
	I0929 12:11:16.265849  400750 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:11:16.266389  400750 main.go:141] libmachine: Using API Version  1
	I0929 12:11:16.266423  400750 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:11:16.266869  400750 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:11:16.267116  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetState
	I0929 12:11:16.267742  400750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42945
	I0929 12:11:16.268142  400750 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:11:16.268688  400750 main.go:141] libmachine: Using API Version  1
	I0929 12:11:16.268707  400750 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:11:16.269112  400750 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:11:16.269510  400750 main.go:141] libmachine: (test-preload-547438) Calling .DriverName
	I0929 12:11:16.269861  400750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 12:11:16.269913  400750 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:11:16.271337  400750 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 12:11:16.272919  400750 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:11:16.272944  400750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 12:11:16.272967  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHHostname
	I0929 12:11:16.277164  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:16.277815  400750 main.go:141] libmachine: (test-preload-547438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:8e:81", ip: ""} in network mk-test-preload-547438: {Iface:virbr1 ExpiryTime:2025-09-29 13:10:56 +0000 UTC Type:0 Mac:52:54:00:a8:8e:81 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:test-preload-547438 Clientid:01:52:54:00:a8:8e:81}
	I0929 12:11:16.277841  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined IP address 192.168.39.143 and MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:16.278177  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHPort
	I0929 12:11:16.278407  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHKeyPath
	I0929 12:11:16.278589  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHUsername
	I0929 12:11:16.278748  400750 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/test-preload-547438/id_rsa Username:docker}
	I0929 12:11:16.284337  400750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41335
	I0929 12:11:16.284824  400750 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:11:16.285308  400750 main.go:141] libmachine: Using API Version  1
	I0929 12:11:16.285329  400750 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:11:16.285657  400750 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:11:16.285895  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetState
	I0929 12:11:16.287823  400750 main.go:141] libmachine: (test-preload-547438) Calling .DriverName
	I0929 12:11:16.288097  400750 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 12:11:16.288119  400750 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 12:11:16.288138  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHHostname
	I0929 12:11:16.291864  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:16.292413  400750 main.go:141] libmachine: (test-preload-547438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:8e:81", ip: ""} in network mk-test-preload-547438: {Iface:virbr1 ExpiryTime:2025-09-29 13:10:56 +0000 UTC Type:0 Mac:52:54:00:a8:8e:81 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:test-preload-547438 Clientid:01:52:54:00:a8:8e:81}
	I0929 12:11:16.292446  400750 main.go:141] libmachine: (test-preload-547438) DBG | domain test-preload-547438 has defined IP address 192.168.39.143 and MAC address 52:54:00:a8:8e:81 in network mk-test-preload-547438
	I0929 12:11:16.292655  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHPort
	I0929 12:11:16.292859  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHKeyPath
	I0929 12:11:16.293046  400750 main.go:141] libmachine: (test-preload-547438) Calling .GetSSHUsername
	I0929 12:11:16.293223  400750 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/test-preload-547438/id_rsa Username:docker}
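
Note: both addon installs reuse the SSH transport that sshutil.go builds above: user docker, the per-machine id_rsa key, port 22. A minimal client sketch assuming golang.org/x/crypto/ssh; host-key verification is skipped here only to keep the sketch short:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/21655-365455/.minikube/machines/test-preload-547438/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // brevity only
        }
        client, err := ssh.Dial("tcp", "192.168.39.143:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        fmt.Println("connected")
    }
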
	I0929 12:11:16.479286  400750 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:11:16.505099  400750 node_ready.go:35] waiting up to 6m0s for node "test-preload-547438" to be "Ready" ...
	I0929 12:11:16.614073  400750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:11:16.622414  400750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 12:11:17.300179  400750 main.go:141] libmachine: Making call to close driver server
	I0929 12:11:17.300209  400750 main.go:141] libmachine: (test-preload-547438) Calling .Close
	I0929 12:11:17.300274  400750 main.go:141] libmachine: Making call to close driver server
	I0929 12:11:17.300298  400750 main.go:141] libmachine: (test-preload-547438) Calling .Close
	I0929 12:11:17.300541  400750 main.go:141] libmachine: (test-preload-547438) DBG | Closing plugin on server side
	I0929 12:11:17.300598  400750 main.go:141] libmachine: Successfully made call to close driver server
	I0929 12:11:17.300607  400750 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 12:11:17.300615  400750 main.go:141] libmachine: Making call to close driver server
	I0929 12:11:17.300622  400750 main.go:141] libmachine: (test-preload-547438) Calling .Close
	I0929 12:11:17.300624  400750 main.go:141] libmachine: Successfully made call to close driver server
	I0929 12:11:17.300649  400750 main.go:141] libmachine: (test-preload-547438) DBG | Closing plugin on server side
	I0929 12:11:17.300740  400750 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 12:11:17.300781  400750 main.go:141] libmachine: Making call to close driver server
	I0929 12:11:17.300800  400750 main.go:141] libmachine: (test-preload-547438) Calling .Close
	I0929 12:11:17.300828  400750 main.go:141] libmachine: Successfully made call to close driver server
	I0929 12:11:17.300867  400750 main.go:141] libmachine: (test-preload-547438) DBG | Closing plugin on server side
	I0929 12:11:17.300872  400750 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 12:11:17.301080  400750 main.go:141] libmachine: Successfully made call to close driver server
	I0929 12:11:17.301093  400750 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 12:11:17.308892  400750 main.go:141] libmachine: Making call to close driver server
	I0929 12:11:17.308918  400750 main.go:141] libmachine: (test-preload-547438) Calling .Close
	I0929 12:11:17.309187  400750 main.go:141] libmachine: Successfully made call to close driver server
	I0929 12:11:17.309206  400750 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 12:11:17.310757  400750 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0929 12:11:17.311670  400750 addons.go:514] duration metric: took 1.077157314s for enable addons: enabled=[storage-provisioner default-storageclass]
	W0929 12:11:18.509614  400750 node_ready.go:57] node "test-preload-547438" has "Ready":"False" status (will retry)
	W0929 12:11:21.008713  400750 node_ready.go:57] node "test-preload-547438" has "Ready":"False" status (will retry)
	W0929 12:11:23.008839  400750 node_ready.go:57] node "test-preload-547438" has "Ready":"False" status (will retry)
	I0929 12:11:24.508395  400750 node_ready.go:49] node "test-preload-547438" is "Ready"
	I0929 12:11:24.508427  400750 node_ready.go:38] duration metric: took 8.003274963s for node "test-preload-547438" to be "Ready" ...
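
Note: the node_ready wait above polls the node object until its Ready condition reports True, retrying on "Ready":"False". A sketch assuming k8s.io/client-go (kubeconfig path, node name, and the 6m timeout from the log):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21655-365455/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for start := time.Now(); time.Since(start) < 6*time.Minute; time.Sleep(2 * time.Second) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "test-preload-547438", metav1.GetOptions{})
            if err != nil {
                continue // transient errors: keep polling
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Println("node is Ready")
                    return
                }
            }
        }
        fmt.Println("timed out waiting for node Ready")
    }
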
	I0929 12:11:24.508441  400750 api_server.go:52] waiting for apiserver process to appear ...
	I0929 12:11:24.508499  400750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:11:24.529957  400750 api_server.go:72] duration metric: took 8.295504506s to wait for apiserver process to appear ...
	I0929 12:11:24.530014  400750 api_server.go:88] waiting for apiserver healthz status ...
	I0929 12:11:24.530040  400750 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I0929 12:11:24.535685  400750 api_server.go:279] https://192.168.39.143:8443/healthz returned 200:
	ok
	I0929 12:11:24.538035  400750 api_server.go:141] control plane version: v1.32.0
	I0929 12:11:24.538058  400750 api_server.go:131] duration metric: took 8.0366ms to wait for apiserver health ...
	I0929 12:11:24.538067  400750 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 12:11:24.541670  400750 system_pods.go:59] 7 kube-system pods found
	I0929 12:11:24.541696  400750 system_pods.go:61] "coredns-668d6bf9bc-k5kwv" [9c3d9f78-43d9-47b0-a5b2-5de5eb6abc78] Running
	I0929 12:11:24.541701  400750 system_pods.go:61] "etcd-test-preload-547438" [d79f04de-f6ab-4e8a-85db-d55458799546] Running
	I0929 12:11:24.541708  400750 system_pods.go:61] "kube-apiserver-test-preload-547438" [9595db86-04d0-41c9-8445-47d15cdcabe5] Running
	I0929 12:11:24.541715  400750 system_pods.go:61] "kube-controller-manager-test-preload-547438" [99cf4c5f-8f66-4357-91aa-82a768422eba] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:11:24.541719  400750 system_pods.go:61] "kube-proxy-f429m" [9d0d823b-ec7d-4696-8d96-9778671de9e7] Running
	I0929 12:11:24.541724  400750 system_pods.go:61] "kube-scheduler-test-preload-547438" [68c02c2f-59cf-4ed9-9c60-5af20e07fe0c] Running
	I0929 12:11:24.541728  400750 system_pods.go:61] "storage-provisioner" [1e6c991c-05a6-4983-86c3-d04c9cacf015] Running
	I0929 12:11:24.541733  400750 system_pods.go:74] duration metric: took 3.660839ms to wait for pod list to return data ...
	I0929 12:11:24.541741  400750 default_sa.go:34] waiting for default service account to be created ...
	I0929 12:11:24.544288  400750 default_sa.go:45] found service account: "default"
	I0929 12:11:24.544308  400750 default_sa.go:55] duration metric: took 2.562487ms for default service account to be created ...
	I0929 12:11:24.544316  400750 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 12:11:24.547275  400750 system_pods.go:86] 7 kube-system pods found
	I0929 12:11:24.547301  400750 system_pods.go:89] "coredns-668d6bf9bc-k5kwv" [9c3d9f78-43d9-47b0-a5b2-5de5eb6abc78] Running
	I0929 12:11:24.547306  400750 system_pods.go:89] "etcd-test-preload-547438" [d79f04de-f6ab-4e8a-85db-d55458799546] Running
	I0929 12:11:24.547310  400750 system_pods.go:89] "kube-apiserver-test-preload-547438" [9595db86-04d0-41c9-8445-47d15cdcabe5] Running
	I0929 12:11:24.547317  400750 system_pods.go:89] "kube-controller-manager-test-preload-547438" [99cf4c5f-8f66-4357-91aa-82a768422eba] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:11:24.547325  400750 system_pods.go:89] "kube-proxy-f429m" [9d0d823b-ec7d-4696-8d96-9778671de9e7] Running
	I0929 12:11:24.547332  400750 system_pods.go:89] "kube-scheduler-test-preload-547438" [68c02c2f-59cf-4ed9-9c60-5af20e07fe0c] Running
	I0929 12:11:24.547335  400750 system_pods.go:89] "storage-provisioner" [1e6c991c-05a6-4983-86c3-d04c9cacf015] Running
	I0929 12:11:24.547341  400750 system_pods.go:126] duration metric: took 3.021262ms to wait for k8s-apps to be running ...
	I0929 12:11:24.547346  400750 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 12:11:24.547391  400750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:11:24.566130  400750 system_svc.go:56] duration metric: took 18.771072ms WaitForService to wait for kubelet
	I0929 12:11:24.566167  400750 kubeadm.go:578] duration metric: took 8.331726882s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:11:24.566193  400750 node_conditions.go:102] verifying NodePressure condition ...
	I0929 12:11:24.569749  400750 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0929 12:11:24.569770  400750 node_conditions.go:123] node cpu capacity is 2
	I0929 12:11:24.569782  400750 node_conditions.go:105] duration metric: took 3.583258ms to run NodePressure ...
	I0929 12:11:24.569793  400750 start.go:241] waiting for startup goroutines ...
	I0929 12:11:24.569800  400750 start.go:246] waiting for cluster config update ...
	I0929 12:11:24.569810  400750 start.go:255] writing updated cluster config ...
	I0929 12:11:24.570129  400750 ssh_runner.go:195] Run: rm -f paused
	I0929 12:11:24.575967  400750 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:11:24.576532  400750 kapi.go:59] client config for test-preload-547438: &rest.Config{Host:"https://192.168.39.143:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21655-365455/.minikube/profiles/test-preload-547438/client.crt", KeyFile:"/home/jenkins/minikube-integration/21655-365455/.minikube/profiles/test-preload-547438/client.key", CAFile:"/home/jenkins/minikube-integration/21655-365455/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0929 12:11:24.641544  400750 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-k5kwv" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:11:24.646271  400750 pod_ready.go:94] pod "coredns-668d6bf9bc-k5kwv" is "Ready"
	I0929 12:11:24.646299  400750 pod_ready.go:86] duration metric: took 4.721045ms for pod "coredns-668d6bf9bc-k5kwv" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:11:24.648424  400750 pod_ready.go:83] waiting for pod "etcd-test-preload-547438" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:11:24.652095  400750 pod_ready.go:94] pod "etcd-test-preload-547438" is "Ready"
	I0929 12:11:24.652115  400750 pod_ready.go:86] duration metric: took 3.662432ms for pod "etcd-test-preload-547438" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:11:24.653875  400750 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-547438" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:11:24.657868  400750 pod_ready.go:94] pod "kube-apiserver-test-preload-547438" is "Ready"
	I0929 12:11:24.657905  400750 pod_ready.go:86] duration metric: took 4.011455ms for pod "kube-apiserver-test-preload-547438" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:11:24.660345  400750 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-547438" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 12:11:26.665854  400750 pod_ready.go:104] pod "kube-controller-manager-test-preload-547438" is not "Ready", error: <nil>
	I0929 12:11:27.666790  400750 pod_ready.go:94] pod "kube-controller-manager-test-preload-547438" is "Ready"
	I0929 12:11:27.666826  400750 pod_ready.go:86] duration metric: took 3.006451589s for pod "kube-controller-manager-test-preload-547438" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:11:27.668999  400750 pod_ready.go:83] waiting for pod "kube-proxy-f429m" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:11:27.979734  400750 pod_ready.go:94] pod "kube-proxy-f429m" is "Ready"
	I0929 12:11:27.979772  400750 pod_ready.go:86] duration metric: took 310.746806ms for pod "kube-proxy-f429m" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:11:28.180156  400750 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-547438" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:11:28.580643  400750 pod_ready.go:94] pod "kube-scheduler-test-preload-547438" is "Ready"
	I0929 12:11:28.580677  400750 pod_ready.go:86] duration metric: took 400.482787ms for pod "kube-scheduler-test-preload-547438" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:11:28.580692  400750 pod_ready.go:40] duration metric: took 4.004664457s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
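
Note: each per-pod wait above succeeds when the pod either reports the Ready condition or no longer exists ("Ready" or be gone). A sketch of that predicate assuming k8s.io/client-go (the pod name is one of those checked above):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // readyOrGone reports true when the pod is Ready or has been deleted.
    func readyOrGone(cs *kubernetes.Clientset, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return true, nil // gone counts as done
        }
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21655-365455/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        done, err := readyOrGone(cs, "kube-scheduler-test-preload-547438")
        fmt.Println(done, err)
    }
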
	I0929 12:11:28.625547  400750 start.go:623] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I0929 12:11:28.626959  400750 out.go:203] 
	W0929 12:11:28.628246  400750 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I0929 12:11:28.629541  400750 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I0929 12:11:28.630813  400750 out.go:179] * Done! kubectl is now configured to use "test-preload-547438" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 29 12:11:29 test-preload-547438 crio[827]: time="2025-09-29 12:11:29.522236412Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759147889522217468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7067b5ca-965a-40b9-9d14-baa6e66f60d3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 12:11:29 test-preload-547438 crio[827]: time="2025-09-29 12:11:29.523104914Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2abbc0a-0dc7-4fbe-9291-6872d06761b0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 12:11:29 test-preload-547438 crio[827]: time="2025-09-29 12:11:29.523233632Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2abbc0a-0dc7-4fbe-9291-6872d06761b0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 12:11:29 test-preload-547438 crio[827]: time="2025-09-29 12:11:29.523679611Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0158877c67785bce9552ca8aa6517cc8ac7e29e67483301623bcc292a1ced793,PodSandboxId:1032816e06afb6fb6dcbfc05736e0cad88def6e2de82beca268f3724333cdb80,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759147882799495922,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-k5kwv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3d9f78-43d9-47b0-a5b2-5de5eb6abc78,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24089c97b43ba7fbbc8ad841aa55cc5966fd99f1d68eddb6bc0bfa32a6c725d1,PodSandboxId:f39d1bf8735bf0393f632e0a80f9ce1cd4ee5d54fb598cb3bc397dbe45138374,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759147875327717066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e6c991c-05a6-4983-86c3-d04c9cacf015,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290805d7d7ddd86136c3c8dc8001bc4b26d84047e9f84082ef7c20e4ffbb42a1,PodSandboxId:14c5fed75a1a871c9966036751528dc0cb7539c13f03c6d595867af6da4aa2b2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759147875206065642,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f429m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d0d823b-ec7d-4696-8d96-9778671de9e7,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0467a8e7dffd3c0ab053461ee8664c23372c4bf6c116d47fe2153db80e53de5,PodSandboxId:23b224fb379bc7ff7f92dbae5b43f131efc0fac8ee054983e195f97fa6db1d5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759147870968482313,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-547438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a16a48f8e574dbd1d9d2a0334cfad2,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:713df631341fd37a6d83fbefeccb80787400e6b7ce463397e440f0dc6fb67fb0,PodSandboxId:47867cd7f960657a6143722f18193a8daffab070305c4bb6b2c2cf645c8ecfe8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759147870943795985,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-547438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ec88c19a0a0ef5390c2d37c32617df,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850f91c1ed33f2a3f0fa6566f9078474d945f668854e2a7fcc4b97e9145efb65,PodSandboxId:da27d099ebea3350923680122a19adec49075d6fcf89d71ef99761479892a16e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759147870945533852,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-547438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaa325e13ca9f97a3c0f8dd0c41fee90,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50273e9263dd339dfa240f5b983d8979920b9c244c7b425a20ac3cbc0a2cd1f,PodSandboxId:9ab9bfa53941147ffe0246cc10bef792495c66d977ffbe4ce17151d16e56a2a6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759147870900795597,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-547438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 906632480cd819002065fd5c3aa2a1c2,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2abbc0a-0dc7-4fbe-9291-6872d06761b0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 12:11:29 test-preload-547438 crio[827]: time="2025-09-29 12:11:29.560416392Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a15325e-2a53-4a3f-b0ac-d94f28f9e5cc name=/runtime.v1.RuntimeService/Version
	Sep 29 12:11:29 test-preload-547438 crio[827]: time="2025-09-29 12:11:29.560527973Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a15325e-2a53-4a3f-b0ac-d94f28f9e5cc name=/runtime.v1.RuntimeService/Version
	Sep 29 12:11:29 test-preload-547438 crio[827]: time="2025-09-29 12:11:29.561992441Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd2cc8b0-68e9-4732-8d35-d20e2ea9e397 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 12:11:29 test-preload-547438 crio[827]: time="2025-09-29 12:11:29.562533858Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759147889562511374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd2cc8b0-68e9-4732-8d35-d20e2ea9e397 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 12:11:29 test-preload-547438 crio[827]: time="2025-09-29 12:11:29.563180116Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=44a356ce-59e7-4ff3-8b33-4a7115545126 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 12:11:29 test-preload-547438 crio[827]: time="2025-09-29 12:11:29.563225973Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=44a356ce-59e7-4ff3-8b33-4a7115545126 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 12:11:29 test-preload-547438 crio[827]: time="2025-09-29 12:11:29.563437763Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0158877c67785bce9552ca8aa6517cc8ac7e29e67483301623bcc292a1ced793,PodSandboxId:1032816e06afb6fb6dcbfc05736e0cad88def6e2de82beca268f3724333cdb80,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759147882799495922,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-k5kwv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3d9f78-43d9-47b0-a5b2-5de5eb6abc78,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24089c97b43ba7fbbc8ad841aa55cc5966fd99f1d68eddb6bc0bfa32a6c725d1,PodSandboxId:f39d1bf8735bf0393f632e0a80f9ce1cd4ee5d54fb598cb3bc397dbe45138374,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759147875327717066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 1e6c991c-05a6-4983-86c3-d04c9cacf015,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290805d7d7ddd86136c3c8dc8001bc4b26d84047e9f84082ef7c20e4ffbb42a1,PodSandboxId:14c5fed75a1a871c9966036751528dc0cb7539c13f03c6d595867af6da4aa2b2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759147875206065642,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f429m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d
0d823b-ec7d-4696-8d96-9778671de9e7,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0467a8e7dffd3c0ab053461ee8664c23372c4bf6c116d47fe2153db80e53de5,PodSandboxId:23b224fb379bc7ff7f92dbae5b43f131efc0fac8ee054983e195f97fa6db1d5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759147870968482313,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-547438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a16a48f
8e574dbd1d9d2a0334cfad2,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:713df631341fd37a6d83fbefeccb80787400e6b7ce463397e440f0dc6fb67fb0,PodSandboxId:47867cd7f960657a6143722f18193a8daffab070305c4bb6b2c2cf645c8ecfe8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759147870943795985,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-547438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ec88c19a0a0ef5390c
2d37c32617df,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850f91c1ed33f2a3f0fa6566f9078474d945f668854e2a7fcc4b97e9145efb65,PodSandboxId:da27d099ebea3350923680122a19adec49075d6fcf89d71ef99761479892a16e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759147870945533852,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-547438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaa325e13ca9f97a3c0f8dd0c41fee90,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50273e9263dd339dfa240f5b983d8979920b9c244c7b425a20ac3cbc0a2cd1f,PodSandboxId:9ab9bfa53941147ffe0246cc10bef792495c66d977ffbe4ce17151d16e56a2a6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759147870900795597,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-547438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 906632480cd819002065fd5c3aa2a1c2,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=44a356ce-59e7-4ff3-8b33-4a7115545126 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 12:11:29 test-preload-547438 crio[827]: time="2025-09-29 12:11:29.601062752Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7511c2c8-df0d-47e8-9d96-c6a2a0f500df name=/runtime.v1.RuntimeService/Version
	Sep 29 12:11:29 test-preload-547438 crio[827]: time="2025-09-29 12:11:29.601147544Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7511c2c8-df0d-47e8-9d96-c6a2a0f500df name=/runtime.v1.RuntimeService/Version
	Sep 29 12:11:29 test-preload-547438 crio[827]: time="2025-09-29 12:11:29.602408203Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1092548e-cb75-49a3-859f-de1f38c56802 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 12:11:29 test-preload-547438 crio[827]: time="2025-09-29 12:11:29.602827236Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759147889602804888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1092548e-cb75-49a3-859f-de1f38c56802 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 12:11:29 test-preload-547438 crio[827]: time="2025-09-29 12:11:29.603497040Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=835d966e-4c62-4817-a1b0-530d5cbb8c77 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 12:11:29 test-preload-547438 crio[827]: time="2025-09-29 12:11:29.603576160Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=835d966e-4c62-4817-a1b0-530d5cbb8c77 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 12:11:29 test-preload-547438 crio[827]: time="2025-09-29 12:11:29.603739950Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0158877c67785bce9552ca8aa6517cc8ac7e29e67483301623bcc292a1ced793,PodSandboxId:1032816e06afb6fb6dcbfc05736e0cad88def6e2de82beca268f3724333cdb80,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759147882799495922,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-k5kwv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3d9f78-43d9-47b0-a5b2-5de5eb6abc78,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24089c97b43ba7fbbc8ad841aa55cc5966fd99f1d68eddb6bc0bfa32a6c725d1,PodSandboxId:f39d1bf8735bf0393f632e0a80f9ce1cd4ee5d54fb598cb3bc397dbe45138374,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759147875327717066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 1e6c991c-05a6-4983-86c3-d04c9cacf015,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290805d7d7ddd86136c3c8dc8001bc4b26d84047e9f84082ef7c20e4ffbb42a1,PodSandboxId:14c5fed75a1a871c9966036751528dc0cb7539c13f03c6d595867af6da4aa2b2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759147875206065642,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f429m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d
0d823b-ec7d-4696-8d96-9778671de9e7,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0467a8e7dffd3c0ab053461ee8664c23372c4bf6c116d47fe2153db80e53de5,PodSandboxId:23b224fb379bc7ff7f92dbae5b43f131efc0fac8ee054983e195f97fa6db1d5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759147870968482313,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-547438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a16a48f
8e574dbd1d9d2a0334cfad2,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:713df631341fd37a6d83fbefeccb80787400e6b7ce463397e440f0dc6fb67fb0,PodSandboxId:47867cd7f960657a6143722f18193a8daffab070305c4bb6b2c2cf645c8ecfe8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759147870943795985,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-547438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ec88c19a0a0ef5390c
2d37c32617df,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850f91c1ed33f2a3f0fa6566f9078474d945f668854e2a7fcc4b97e9145efb65,PodSandboxId:da27d099ebea3350923680122a19adec49075d6fcf89d71ef99761479892a16e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759147870945533852,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-547438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaa325e13ca9f97a3c0f8dd0c41fee90,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50273e9263dd339dfa240f5b983d8979920b9c244c7b425a20ac3cbc0a2cd1f,PodSandboxId:9ab9bfa53941147ffe0246cc10bef792495c66d977ffbe4ce17151d16e56a2a6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759147870900795597,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-547438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 906632480cd819002065fd5c3aa2a1c2,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=835d966e-4c62-4817-a1b0-530d5cbb8c77 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 12:11:29 test-preload-547438 crio[827]: time="2025-09-29 12:11:29.638860151Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=527dd9bf-5cc6-4359-8321-79eef7d8ecc2 name=/runtime.v1.RuntimeService/Version
	Sep 29 12:11:29 test-preload-547438 crio[827]: time="2025-09-29 12:11:29.638947574Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=527dd9bf-5cc6-4359-8321-79eef7d8ecc2 name=/runtime.v1.RuntimeService/Version
	Sep 29 12:11:29 test-preload-547438 crio[827]: time="2025-09-29 12:11:29.640826651Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c4d57a52-e30d-4b38-b9ad-6d93ebfc7838 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 12:11:29 test-preload-547438 crio[827]: time="2025-09-29 12:11:29.641360433Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759147889641284974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c4d57a52-e30d-4b38-b9ad-6d93ebfc7838 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 12:11:29 test-preload-547438 crio[827]: time="2025-09-29 12:11:29.641881958Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=86655278-55c1-4ed3-84c2-4323463f5752 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 12:11:29 test-preload-547438 crio[827]: time="2025-09-29 12:11:29.641941705Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=86655278-55c1-4ed3-84c2-4323463f5752 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 12:11:29 test-preload-547438 crio[827]: time="2025-09-29 12:11:29.643070393Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0158877c67785bce9552ca8aa6517cc8ac7e29e67483301623bcc292a1ced793,PodSandboxId:1032816e06afb6fb6dcbfc05736e0cad88def6e2de82beca268f3724333cdb80,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759147882799495922,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-k5kwv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3d9f78-43d9-47b0-a5b2-5de5eb6abc78,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24089c97b43ba7fbbc8ad841aa55cc5966fd99f1d68eddb6bc0bfa32a6c725d1,PodSandboxId:f39d1bf8735bf0393f632e0a80f9ce1cd4ee5d54fb598cb3bc397dbe45138374,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759147875327717066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 1e6c991c-05a6-4983-86c3-d04c9cacf015,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:290805d7d7ddd86136c3c8dc8001bc4b26d84047e9f84082ef7c20e4ffbb42a1,PodSandboxId:14c5fed75a1a871c9966036751528dc0cb7539c13f03c6d595867af6da4aa2b2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759147875206065642,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f429m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d
0d823b-ec7d-4696-8d96-9778671de9e7,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0467a8e7dffd3c0ab053461ee8664c23372c4bf6c116d47fe2153db80e53de5,PodSandboxId:23b224fb379bc7ff7f92dbae5b43f131efc0fac8ee054983e195f97fa6db1d5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759147870968482313,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-547438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8a16a48f
8e574dbd1d9d2a0334cfad2,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:713df631341fd37a6d83fbefeccb80787400e6b7ce463397e440f0dc6fb67fb0,PodSandboxId:47867cd7f960657a6143722f18193a8daffab070305c4bb6b2c2cf645c8ecfe8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759147870943795985,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-547438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6ec88c19a0a0ef5390c
2d37c32617df,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850f91c1ed33f2a3f0fa6566f9078474d945f668854e2a7fcc4b97e9145efb65,PodSandboxId:da27d099ebea3350923680122a19adec49075d6fcf89d71ef99761479892a16e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759147870945533852,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-547438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaa325e13ca9f97a3c0f8dd0c41fee90,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50273e9263dd339dfa240f5b983d8979920b9c244c7b425a20ac3cbc0a2cd1f,PodSandboxId:9ab9bfa53941147ffe0246cc10bef792495c66d977ffbe4ce17151d16e56a2a6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759147870900795597,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-547438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 906632480cd819002065fd5c3aa2a1c2,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=86655278-55c1-4ed3-84c2-4323463f5752 name=/runtime.v1.RuntimeService/ListContainers
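The Version / ImageFsInfo / ListContainers triplets above repeat because the log collector polls the runtime over CRI every few dozen milliseconds. For reference, a minimal Go sketch of the same three RPCs against the CRI-O socket named in the node's cri-socket annotation (unix:///var/run/crio/crio.sock); this is a hypothetical standalone probe built on the published cri-api client, not the collector's actual code.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Dial the CRI-O socket from the node's cri-socket annotation.
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	img := runtimeapi.NewImageServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// Same three RPCs the debug log shows being polled.
    	v, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("runtime:", v.GetRuntimeName(), v.GetRuntimeVersion())

    	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
    	if err != nil {
    		panic(err)
    	}
    	for _, f := range fs.GetImageFilesystems() {
    		fmt.Println("image fs:", f.GetFsId().GetMountpoint(), f.GetUsedBytes().GetValue(), "bytes used")
    	}

    	// An empty filter is what triggers "No filters were applied,
    	// returning full container list" on the server side.
    	cs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range cs.GetContainers() {
    		fmt.Println(c.GetMetadata().GetName(), c.GetState())
    	}
    }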
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0158877c67785       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   6 seconds ago       Running             coredns                   1                   1032816e06afb       coredns-668d6bf9bc-k5kwv
	24089c97b43ba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   f39d1bf8735bf       storage-provisioner
	290805d7d7ddd       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   14 seconds ago      Running             kube-proxy                1                   14c5fed75a1a8       kube-proxy-f429m
	c0467a8e7dffd       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   18 seconds ago      Running             kube-scheduler            1                   23b224fb379bc       kube-scheduler-test-preload-547438
	850f91c1ed33f       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   18 seconds ago      Running             etcd                      1                   da27d099ebea3       etcd-test-preload-547438
	713df631341fd       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   18 seconds ago      Running             kube-apiserver            1                   47867cd7f9606       kube-apiserver-test-preload-547438
	a50273e9263dd       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   18 seconds ago      Running             kube-controller-manager   1                   9ab9bfa539411       kube-controller-manager-test-preload-547438
	
	
	==> coredns [0158877c67785bce9552ca8aa6517cc8ac7e29e67483301623bcc292a1ced793] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:35644 - 58332 "HINFO IN 7125454933547035065.2413895008174284861. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.03351538s
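The [INFO] line above is CoreDNS's loop-detection probe (a random-label HINFO query against its own listener; NXDOMAIN means no forwarding loop). To exercise the same resolver path from a client, a sketch that points Go's resolver at the cluster DNS service; the 10.96.0.10 address is an assumption (the usual kube-dns ClusterIP on minikube), so verify with `kubectl -n kube-system get svc kube-dns` first.

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Assumed kube-dns ClusterIP; adjust to what the cluster reports.
    	r := &net.Resolver{
    		PreferGo: true,
    		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
    			d := net.Dialer{Timeout: 2 * time.Second}
    			return d.DialContext(ctx, network, "10.96.0.10:53")
    		},
    	}
    	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
    	fmt.Println(addrs, err)
    }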
	
	
	==> describe nodes <==
	Name:               test-preload-547438
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-547438
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e087d081f23c6d1317bb12845422265d8d3490cf
	                    minikube.k8s.io/name=test-preload-547438
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T12_10_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 12:10:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-547438
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 12:11:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 12:11:24 +0000   Mon, 29 Sep 2025 12:10:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 12:11:24 +0000   Mon, 29 Sep 2025 12:10:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 12:11:24 +0000   Mon, 29 Sep 2025 12:10:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 12:11:24 +0000   Mon, 29 Sep 2025 12:11:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.143
	  Hostname:    test-preload-547438
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 85119d9d5c7e490aa0aef2f497e9805b
	  System UUID:                85119d9d-5c7e-490a-a0ae-f2f497e9805b
	  Boot ID:                    a8010fb5-d7f5-4d27-8f7a-da67fb50ade4
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-k5kwv                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     78s
	  kube-system                 etcd-test-preload-547438                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         83s
	  kube-system                 kube-apiserver-test-preload-547438             250m (12%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-test-preload-547438    200m (10%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-f429m                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-test-preload-547438             100m (5%)     0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 77s                kube-proxy       
	  Normal   Starting                 14s                kube-proxy       
	  Normal   NodeHasSufficientMemory  89s (x8 over 89s)  kubelet          Node test-preload-547438 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    89s (x8 over 89s)  kubelet          Node test-preload-547438 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     89s (x7 over 89s)  kubelet          Node test-preload-547438 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  89s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 83s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     82s                kubelet          Node test-preload-547438 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    82s                kubelet          Node test-preload-547438 status is now: NodeHasNoDiskPressure
	  Normal   NodeReady                82s                kubelet          Node test-preload-547438 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  82s                kubelet          Node test-preload-547438 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           79s                node-controller  Node test-preload-547438 event: Registered Node test-preload-547438 in Controller
	  Normal   Starting                 21s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-547438 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-547438 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-547438 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 15s                kubelet          Node test-preload-547438 has been rebooted, boot id: a8010fb5-d7f5-4d27-8f7a-da67fb50ade4
	  Normal   RegisteredNode           12s                node-controller  Node test-preload-547438 event: Registered Node test-preload-547438 in Controller
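The Conditions table above is what the post-mortem Ready check keys off after the Rebooted warning in Events. A client-go sketch that reads the same conditions for this node; the kubeconfig path is an assumption (the default ~/.kube/config).

    package main

    import (
    	"context"
    	"fmt"
    	"path/filepath"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/homedir"
    )

    func main() {
    	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.Background(), "test-preload-547438", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Prints the same Type/Status/Reason rows as the Conditions table.
    	for _, c := range node.Status.Conditions {
    		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
    	}
    }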
	
	
	==> dmesg <==
	[Sep29 12:10] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001454] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005764] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.984421] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep29 12:11] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.098966] kauditd_printk_skb: 102 callbacks suppressed
	[  +6.502293] kauditd_printk_skb: 177 callbacks suppressed
	[  +0.000046] kauditd_printk_skb: 128 callbacks suppressed
	[  +5.298029] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [850f91c1ed33f2a3f0fa6566f9078474d945f668854e2a7fcc4b97e9145efb65] <==
	{"level":"info","ts":"2025-09-29T12:11:11.354032Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6857887556ef56db","local-member-id":"be0eebdc09990bfd","added-peer-id":"be0eebdc09990bfd","added-peer-peer-urls":["https://192.168.39.143:2380"]}
	{"level":"info","ts":"2025-09-29T12:11:11.354138Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6857887556ef56db","local-member-id":"be0eebdc09990bfd","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T12:11:11.354177Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T12:11:11.356759Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-29T12:11:11.370287Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-29T12:11:11.370682Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"be0eebdc09990bfd","initial-advertise-peer-urls":["https://192.168.39.143:2380"],"listen-peer-urls":["https://192.168.39.143:2380"],"advertise-client-urls":["https://192.168.39.143:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.143:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-29T12:11:11.370730Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-29T12:11:11.370856Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.143:2380"}
	{"level":"info","ts":"2025-09-29T12:11:11.370899Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.143:2380"}
	{"level":"info","ts":"2025-09-29T12:11:13.026536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-29T12:11:13.026588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-29T12:11:13.026622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd received MsgPreVoteResp from be0eebdc09990bfd at term 2"}
	{"level":"info","ts":"2025-09-29T12:11:13.026641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd became candidate at term 3"}
	{"level":"info","ts":"2025-09-29T12:11:13.026647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd received MsgVoteResp from be0eebdc09990bfd at term 3"}
	{"level":"info","ts":"2025-09-29T12:11:13.026655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd became leader at term 3"}
	{"level":"info","ts":"2025-09-29T12:11:13.026662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: be0eebdc09990bfd elected leader be0eebdc09990bfd at term 3"}
	{"level":"info","ts":"2025-09-29T12:11:13.028765Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"be0eebdc09990bfd","local-member-attributes":"{Name:test-preload-547438 ClientURLs:[https://192.168.39.143:2379]}","request-path":"/0/members/be0eebdc09990bfd/attributes","cluster-id":"6857887556ef56db","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-29T12:11:13.028774Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T12:11:13.028794Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T12:11:13.029393Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-29T12:11:13.029411Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-29T12:11:13.029872Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-29T12:11:13.029872Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-29T12:11:13.030489Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-29T12:11:13.030946Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.143:2379"}
	
	
	==> kernel <==
	 12:11:29 up 0 min,  0 users,  load average: 0.28, 0.08, 0.02
	Linux test-preload-547438 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [713df631341fd37a6d83fbefeccb80787400e6b7ce463397e440f0dc6fb67fb0] <==
	I0929 12:11:14.124036       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0929 12:11:14.126568       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0929 12:11:14.126816       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0929 12:11:14.173178       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0929 12:11:14.173215       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0929 12:11:14.173229       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0929 12:11:14.173247       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0929 12:11:14.173759       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0929 12:11:14.175047       1 shared_informer.go:320] Caches are synced for configmaps
	I0929 12:11:14.175586       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0929 12:11:14.187011       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0929 12:11:14.224252       1 cache.go:39] Caches are synced for autoregister controller
	I0929 12:11:14.224356       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0929 12:11:14.231780       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0929 12:11:14.231817       1 policy_source.go:240] refreshing policies
	I0929 12:11:14.302973       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0929 12:11:14.810442       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0929 12:11:15.083186       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0929 12:11:16.006430       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0929 12:11:16.042139       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0929 12:11:16.071649       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0929 12:11:16.079519       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0929 12:11:17.535059       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 12:11:17.737685       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0929 12:11:17.836484       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [a50273e9263dd339dfa240f5b983d8979920b9c244c7b425a20ac3cbc0a2cd1f] <==
	I0929 12:11:17.383184       1 shared_informer.go:320] Caches are synced for cronjob
	I0929 12:11:17.384742       1 shared_informer.go:320] Caches are synced for HPA
	I0929 12:11:17.384781       1 shared_informer.go:320] Caches are synced for taint
	I0929 12:11:17.384850       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 12:11:17.384888       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0929 12:11:17.384923       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-547438"
	I0929 12:11:17.384971       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0929 12:11:17.385487       1 shared_informer.go:320] Caches are synced for deployment
	I0929 12:11:17.386225       1 shared_informer.go:320] Caches are synced for job
	I0929 12:11:17.387592       1 shared_informer.go:320] Caches are synced for resource quota
	I0929 12:11:17.387626       1 shared_informer.go:320] Caches are synced for persistent volume
	I0929 12:11:17.390272       1 shared_informer.go:320] Caches are synced for endpoint
	I0929 12:11:17.392938       1 shared_informer.go:320] Caches are synced for GC
	I0929 12:11:17.396442       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0929 12:11:17.411847       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-547438"
	I0929 12:11:17.412772       1 shared_informer.go:320] Caches are synced for garbage collector
	I0929 12:11:17.745572       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="406.826889ms"
	I0929 12:11:17.745673       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="53.079µs"
	I0929 12:11:22.914437       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="52.734µs"
	I0929 12:11:23.922898       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="11.903949ms"
	I0929 12:11:23.923798       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="60.371µs"
	I0929 12:11:23.928284       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I0929 12:11:24.389076       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-547438"
	I0929 12:11:24.401354       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-547438"
	I0929 12:11:27.387431       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [290805d7d7ddd86136c3c8dc8001bc4b26d84047e9f84082ef7c20e4ffbb42a1] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0929 12:11:15.574785       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0929 12:11:15.585251       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.143"]
	E0929 12:11:15.585382       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:11:15.617894       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0929 12:11:15.617938       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 12:11:15.617963       1 server_linux.go:170] "Using iptables Proxier"
	I0929 12:11:15.620641       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:11:15.620964       1 server.go:497] "Version info" version="v1.32.0"
	I0929 12:11:15.621045       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:11:15.622687       1 config.go:199] "Starting service config controller"
	I0929 12:11:15.622743       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0929 12:11:15.622919       1 config.go:105] "Starting endpoint slice config controller"
	I0929 12:11:15.622946       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0929 12:11:15.623653       1 config.go:329] "Starting node config controller"
	I0929 12:11:15.623688       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0929 12:11:15.723581       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0929 12:11:15.723613       1 shared_informer.go:320] Caches are synced for service config
	I0929 12:11:15.723962       1 shared_informer.go:320] Caches are synced for node config
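The two cleanup errors above are kube-proxy probing nftables before settling on the iptables proxier; "Operation not supported" just means this guest kernel lacks nftables rule support, so the fallback is expected rather than a failure. A sketch of the same probe, assuming nft is on PATH and, judging from the /dev/stdin reference in the error text, that the rule is fed over stdin.

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Feed the same command the log shows failing: "add table ip kube-proxy".
    	cmd := exec.Command("nft", "-f", "/dev/stdin")
    	cmd.Stdin = strings.NewReader("add table ip kube-proxy\n")
    	var stderr bytes.Buffer
    	cmd.Stderr = &stderr
    	if err := cmd.Run(); err != nil {
    		// On this kernel: "Error: Could not process rule: Operation not supported".
    		fmt.Println("nftables unavailable:", strings.TrimSpace(stderr.String()))
    		return
    	}
    	fmt.Println("nftables table created")
    }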
	
	
	==> kube-scheduler [c0467a8e7dffd3c0ab053461ee8664c23372c4bf6c116d47fe2153db80e53de5] <==
	I0929 12:11:12.147065       1 serving.go:386] Generated self-signed cert in-memory
	W0929 12:11:14.130069       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 12:11:14.130107       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 12:11:14.130117       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 12:11:14.130127       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 12:11:14.164121       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I0929 12:11:14.165005       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:11:14.168884       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0929 12:11:14.169016       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:11:14.169033       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0929 12:11:14.170505       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 12:11:14.270829       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
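The requestheader/authentication warnings above resolved themselves once the client-ca cache synced, so no action was needed on this run. If the forbidden-configmap error persisted, the fix the scheduler itself suggests would look roughly like this (binding name is arbitrary; the subject is the `system:kube-scheduler` user named in the denial message, not a service account):

	kubectl create rolebinding -n kube-system scheduler-auth-reader \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler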
	
	
	==> kubelet <==
	Sep 29 12:11:14 test-preload-547438 kubelet[1149]: I0929 12:11:14.316126    1149 kubelet_node_status.go:125] "Node was previously registered" node="test-preload-547438"
	Sep 29 12:11:14 test-preload-547438 kubelet[1149]: I0929 12:11:14.316231    1149 kubelet_node_status.go:79] "Successfully registered node" node="test-preload-547438"
	Sep 29 12:11:14 test-preload-547438 kubelet[1149]: I0929 12:11:14.316256    1149 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 29 12:11:14 test-preload-547438 kubelet[1149]: I0929 12:11:14.317841    1149 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 29 12:11:14 test-preload-547438 kubelet[1149]: I0929 12:11:14.319751    1149 setters.go:602] "Node became not ready" node="test-preload-547438" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-29T12:11:14Z","lastTransitionTime":"2025-09-29T12:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Sep 29 12:11:14 test-preload-547438 kubelet[1149]: I0929 12:11:14.745618    1149 apiserver.go:52] "Watching apiserver"
	Sep 29 12:11:14 test-preload-547438 kubelet[1149]: E0929 12:11:14.750810    1149 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-k5kwv" podUID="9c3d9f78-43d9-47b0-a5b2-5de5eb6abc78"
	Sep 29 12:11:14 test-preload-547438 kubelet[1149]: I0929 12:11:14.763113    1149 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Sep 29 12:11:14 test-preload-547438 kubelet[1149]: I0929 12:11:14.805188    1149 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d0d823b-ec7d-4696-8d96-9778671de9e7-lib-modules\") pod \"kube-proxy-f429m\" (UID: \"9d0d823b-ec7d-4696-8d96-9778671de9e7\") " pod="kube-system/kube-proxy-f429m"
	Sep 29 12:11:14 test-preload-547438 kubelet[1149]: I0929 12:11:14.805246    1149 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1e6c991c-05a6-4983-86c3-d04c9cacf015-tmp\") pod \"storage-provisioner\" (UID: \"1e6c991c-05a6-4983-86c3-d04c9cacf015\") " pod="kube-system/storage-provisioner"
	Sep 29 12:11:14 test-preload-547438 kubelet[1149]: I0929 12:11:14.805281    1149 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d0d823b-ec7d-4696-8d96-9778671de9e7-xtables-lock\") pod \"kube-proxy-f429m\" (UID: \"9d0d823b-ec7d-4696-8d96-9778671de9e7\") " pod="kube-system/kube-proxy-f429m"
	Sep 29 12:11:14 test-preload-547438 kubelet[1149]: E0929 12:11:14.806472    1149 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 29 12:11:14 test-preload-547438 kubelet[1149]: E0929 12:11:14.806553    1149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c3d9f78-43d9-47b0-a5b2-5de5eb6abc78-config-volume podName:9c3d9f78-43d9-47b0-a5b2-5de5eb6abc78 nodeName:}" failed. No retries permitted until 2025-09-29 12:11:15.306532848 +0000 UTC m=+6.656393788 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9c3d9f78-43d9-47b0-a5b2-5de5eb6abc78-config-volume") pod "coredns-668d6bf9bc-k5kwv" (UID: "9c3d9f78-43d9-47b0-a5b2-5de5eb6abc78") : object "kube-system"/"coredns" not registered
	Sep 29 12:11:15 test-preload-547438 kubelet[1149]: E0929 12:11:15.311164    1149 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 29 12:11:15 test-preload-547438 kubelet[1149]: E0929 12:11:15.311251    1149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c3d9f78-43d9-47b0-a5b2-5de5eb6abc78-config-volume podName:9c3d9f78-43d9-47b0-a5b2-5de5eb6abc78 nodeName:}" failed. No retries permitted until 2025-09-29 12:11:16.311229046 +0000 UTC m=+7.661089974 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9c3d9f78-43d9-47b0-a5b2-5de5eb6abc78-config-volume") pod "coredns-668d6bf9bc-k5kwv" (UID: "9c3d9f78-43d9-47b0-a5b2-5de5eb6abc78") : object "kube-system"/"coredns" not registered
	Sep 29 12:11:16 test-preload-547438 kubelet[1149]: E0929 12:11:16.318420    1149 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 29 12:11:16 test-preload-547438 kubelet[1149]: E0929 12:11:16.318860    1149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c3d9f78-43d9-47b0-a5b2-5de5eb6abc78-config-volume podName:9c3d9f78-43d9-47b0-a5b2-5de5eb6abc78 nodeName:}" failed. No retries permitted until 2025-09-29 12:11:18.318839964 +0000 UTC m=+9.668700904 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9c3d9f78-43d9-47b0-a5b2-5de5eb6abc78-config-volume") pod "coredns-668d6bf9bc-k5kwv" (UID: "9c3d9f78-43d9-47b0-a5b2-5de5eb6abc78") : object "kube-system"/"coredns" not registered
	Sep 29 12:11:16 test-preload-547438 kubelet[1149]: E0929 12:11:16.779858    1149 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-k5kwv" podUID="9c3d9f78-43d9-47b0-a5b2-5de5eb6abc78"
	Sep 29 12:11:18 test-preload-547438 kubelet[1149]: E0929 12:11:18.333533    1149 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 29 12:11:18 test-preload-547438 kubelet[1149]: E0929 12:11:18.333597    1149 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c3d9f78-43d9-47b0-a5b2-5de5eb6abc78-config-volume podName:9c3d9f78-43d9-47b0-a5b2-5de5eb6abc78 nodeName:}" failed. No retries permitted until 2025-09-29 12:11:22.333584255 +0000 UTC m=+13.683445195 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9c3d9f78-43d9-47b0-a5b2-5de5eb6abc78-config-volume") pod "coredns-668d6bf9bc-k5kwv" (UID: "9c3d9f78-43d9-47b0-a5b2-5de5eb6abc78") : object "kube-system"/"coredns" not registered
	Sep 29 12:11:18 test-preload-547438 kubelet[1149]: E0929 12:11:18.779994    1149 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-k5kwv" podUID="9c3d9f78-43d9-47b0-a5b2-5de5eb6abc78"
	Sep 29 12:11:18 test-preload-547438 kubelet[1149]: E0929 12:11:18.833960    1149 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759147878833541330,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 29 12:11:18 test-preload-547438 kubelet[1149]: E0929 12:11:18.834018    1149 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759147878833541330,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 29 12:11:28 test-preload-547438 kubelet[1149]: E0929 12:11:28.837036    1149 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759147888836551147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 29 12:11:28 test-preload-547438 kubelet[1149]: E0929 12:11:28.837058    1149 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759147888836551147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
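Almost every kubelet error in this block traces back to one condition: no CNI config in /etc/cni/net.d/, so NetworkReady stays false and the coredns config-volume mount keeps retrying with backoff. A quick way to watch that condition clear while the cluster is still up (profile name from this run; the path is the one kubelet logs above):

	minikube ssh -p test-preload-547438 -- ls -l /etc/cni/net.d/
	# empty until minikube writes the bridge CNI config; once a config file appears,
	# kubelet flips Ready and the pending coredns pod can set up its mounts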
	
	
	==> storage-provisioner [24089c97b43ba7fbbc8ad841aa55cc5966fd99f1d68eddb6bc0bfa32a6c725d1] <==
	I0929 12:11:15.516090       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-547438 -n test-preload-547438
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-547438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-547438" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-547438
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-547438: (1.030837258s)
--- FAIL: TestPreload (133.73s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (361.92s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-448284 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:92: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p pause-448284 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 80 (5m59.845326201s)

                                                
                                                
-- stdout --
	* [pause-448284] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21655
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21655-365455/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-365455/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-448284" primary control-plane node in "pause-448284" cluster
	* Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 12:15:08.555450  405898 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:15:08.555711  405898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:15:08.555721  405898 out.go:374] Setting ErrFile to fd 2...
	I0929 12:15:08.555726  405898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:15:08.555990  405898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-365455/.minikube/bin
	I0929 12:15:08.556561  405898 out.go:368] Setting JSON to false
	I0929 12:15:08.557563  405898 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7051,"bootTime":1759141058,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:15:08.557631  405898 start.go:140] virtualization: kvm guest
	I0929 12:15:08.559505  405898 out.go:179] * [pause-448284] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 12:15:08.560902  405898 notify.go:220] Checking for updates...
	I0929 12:15:08.560949  405898 out.go:179]   - MINIKUBE_LOCATION=21655
	I0929 12:15:08.562263  405898 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:15:08.564173  405898 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21655-365455/kubeconfig
	I0929 12:15:08.565277  405898 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-365455/.minikube
	I0929 12:15:08.566458  405898 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 12:15:08.568327  405898 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:15:08.570317  405898 config.go:182] Loaded profile config "pause-448284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:15:08.570990  405898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 12:15:08.571084  405898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:15:08.586763  405898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46847
	I0929 12:15:08.587446  405898 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:15:08.588408  405898 main.go:141] libmachine: Using API Version  1
	I0929 12:15:08.588444  405898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:15:08.588967  405898 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:15:08.589170  405898 main.go:141] libmachine: (pause-448284) Calling .DriverName
	I0929 12:15:08.589450  405898 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:15:08.589768  405898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 12:15:08.589827  405898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:15:08.606548  405898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36453
	I0929 12:15:08.607196  405898 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:15:08.607659  405898 main.go:141] libmachine: Using API Version  1
	I0929 12:15:08.607684  405898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:15:08.608068  405898 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:15:08.608377  405898 main.go:141] libmachine: (pause-448284) Calling .DriverName
	I0929 12:15:08.646904  405898 out.go:179] * Using the kvm2 driver based on existing profile
	I0929 12:15:08.648069  405898 start.go:304] selected driver: kvm2
	I0929 12:15:08.648091  405898 start.go:924] validating driver "kvm2" against &{Name:pause-448284 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-448284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:15:08.648299  405898 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:15:08.648844  405898 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:15:08.648942  405898 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21655-365455/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 12:15:08.667526  405898 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 12:15:08.667588  405898 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21655-365455/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 12:15:08.683064  405898 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 12:15:08.683877  405898 cni.go:84] Creating CNI manager for ""
	I0929 12:15:08.683944  405898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 12:15:08.684028  405898 start.go:348] cluster config:
	{Name:pause-448284 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-448284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:15:08.684206  405898 iso.go:125] acquiring lock: {Name:mkf6a4bd1628698e7eb4c42d44aa8328e64686d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:15:08.688131  405898 out.go:179] * Starting "pause-448284" primary control-plane node in "pause-448284" cluster
	I0929 12:15:08.689402  405898 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 12:15:08.689459  405898 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21655-365455/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 12:15:08.689478  405898 cache.go:58] Caching tarball of preloaded images
	I0929 12:15:08.689620  405898 preload.go:172] Found /home/jenkins/minikube-integration/21655-365455/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 12:15:08.689638  405898 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 12:15:08.689826  405898 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/pause-448284/config.json ...
	I0929 12:15:08.690080  405898 start.go:360] acquireMachinesLock for pause-448284: {Name:mk02e688f69f8dfa335651bd732d9d18b60c0952 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0929 12:15:15.709081  405898 start.go:364] duration metric: took 7.018955837s to acquireMachinesLock for "pause-448284"
	I0929 12:15:15.709127  405898 start.go:96] Skipping create...Using existing machine configuration
	I0929 12:15:15.709140  405898 fix.go:54] fixHost starting: 
	I0929 12:15:15.709630  405898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 12:15:15.709689  405898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:15:15.727678  405898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46651
	I0929 12:15:15.728324  405898 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:15:15.728946  405898 main.go:141] libmachine: Using API Version  1
	I0929 12:15:15.729003  405898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:15:15.729399  405898 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:15:15.729682  405898 main.go:141] libmachine: (pause-448284) Calling .DriverName
	I0929 12:15:15.729871  405898 main.go:141] libmachine: (pause-448284) Calling .GetState
	I0929 12:15:15.732060  405898 fix.go:112] recreateIfNeeded on pause-448284: state=Running err=<nil>
	W0929 12:15:15.732080  405898 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 12:15:15.888882  405898 out.go:252] * Updating the running kvm2 "pause-448284" VM ...
	I0929 12:15:15.888982  405898 machine.go:93] provisionDockerMachine start ...
	I0929 12:15:15.889004  405898 main.go:141] libmachine: (pause-448284) Calling .DriverName
	I0929 12:15:15.889375  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHHostname
	I0929 12:15:15.893120  405898 main.go:141] libmachine: (pause-448284) DBG | domain pause-448284 has defined MAC address 52:54:00:55:ae:28 in network mk-pause-448284
	I0929 12:15:15.893671  405898 main.go:141] libmachine: (pause-448284) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ae:28", ip: ""} in network mk-pause-448284: {Iface:virbr2 ExpiryTime:2025-09-29 13:13:57 +0000 UTC Type:0 Mac:52:54:00:55:ae:28 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:pause-448284 Clientid:01:52:54:00:55:ae:28}
	I0929 12:15:15.893717  405898 main.go:141] libmachine: (pause-448284) DBG | domain pause-448284 has defined IP address 192.168.50.251 and MAC address 52:54:00:55:ae:28 in network mk-pause-448284
	I0929 12:15:15.893986  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHPort
	I0929 12:15:15.894219  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHKeyPath
	I0929 12:15:15.894449  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHKeyPath
	I0929 12:15:15.894645  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHUsername
	I0929 12:15:15.894853  405898 main.go:141] libmachine: Using SSH client type: native
	I0929 12:15:15.895169  405898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0929 12:15:15.895189  405898 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 12:15:16.013464  405898 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-448284
	
	I0929 12:15:16.013499  405898 main.go:141] libmachine: (pause-448284) Calling .GetMachineName
	I0929 12:15:16.013825  405898 buildroot.go:166] provisioning hostname "pause-448284"
	I0929 12:15:16.013864  405898 main.go:141] libmachine: (pause-448284) Calling .GetMachineName
	I0929 12:15:16.014082  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHHostname
	I0929 12:15:16.017672  405898 main.go:141] libmachine: (pause-448284) DBG | domain pause-448284 has defined MAC address 52:54:00:55:ae:28 in network mk-pause-448284
	I0929 12:15:16.018193  405898 main.go:141] libmachine: (pause-448284) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ae:28", ip: ""} in network mk-pause-448284: {Iface:virbr2 ExpiryTime:2025-09-29 13:13:57 +0000 UTC Type:0 Mac:52:54:00:55:ae:28 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:pause-448284 Clientid:01:52:54:00:55:ae:28}
	I0929 12:15:16.018275  405898 main.go:141] libmachine: (pause-448284) DBG | domain pause-448284 has defined IP address 192.168.50.251 and MAC address 52:54:00:55:ae:28 in network mk-pause-448284
	I0929 12:15:16.018395  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHPort
	I0929 12:15:16.018598  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHKeyPath
	I0929 12:15:16.018801  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHKeyPath
	I0929 12:15:16.019033  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHUsername
	I0929 12:15:16.019225  405898 main.go:141] libmachine: Using SSH client type: native
	I0929 12:15:16.019537  405898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0929 12:15:16.019556  405898 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-448284 && echo "pause-448284" | sudo tee /etc/hostname
	I0929 12:15:16.152533  405898 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-448284
	
	I0929 12:15:16.152569  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHHostname
	I0929 12:15:16.156483  405898 main.go:141] libmachine: (pause-448284) DBG | domain pause-448284 has defined MAC address 52:54:00:55:ae:28 in network mk-pause-448284
	I0929 12:15:16.157020  405898 main.go:141] libmachine: (pause-448284) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ae:28", ip: ""} in network mk-pause-448284: {Iface:virbr2 ExpiryTime:2025-09-29 13:13:57 +0000 UTC Type:0 Mac:52:54:00:55:ae:28 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:pause-448284 Clientid:01:52:54:00:55:ae:28}
	I0929 12:15:16.157081  405898 main.go:141] libmachine: (pause-448284) DBG | domain pause-448284 has defined IP address 192.168.50.251 and MAC address 52:54:00:55:ae:28 in network mk-pause-448284
	I0929 12:15:16.157271  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHPort
	I0929 12:15:16.157508  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHKeyPath
	I0929 12:15:16.157702  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHKeyPath
	I0929 12:15:16.157924  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHUsername
	I0929 12:15:16.158118  405898 main.go:141] libmachine: Using SSH client type: native
	I0929 12:15:16.158418  405898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0929 12:15:16.158439  405898 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-448284' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-448284/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-448284' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 12:15:16.274613  405898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 12:15:16.274653  405898 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21655-365455/.minikube CaCertPath:/home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21655-365455/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21655-365455/.minikube}
	I0929 12:15:16.274710  405898 buildroot.go:174] setting up certificates
	I0929 12:15:16.274722  405898 provision.go:84] configureAuth start
	I0929 12:15:16.274738  405898 main.go:141] libmachine: (pause-448284) Calling .GetMachineName
	I0929 12:15:16.275114  405898 main.go:141] libmachine: (pause-448284) Calling .GetIP
	I0929 12:15:16.280399  405898 main.go:141] libmachine: (pause-448284) DBG | domain pause-448284 has defined MAC address 52:54:00:55:ae:28 in network mk-pause-448284
	I0929 12:15:16.280932  405898 main.go:141] libmachine: (pause-448284) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ae:28", ip: ""} in network mk-pause-448284: {Iface:virbr2 ExpiryTime:2025-09-29 13:13:57 +0000 UTC Type:0 Mac:52:54:00:55:ae:28 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:pause-448284 Clientid:01:52:54:00:55:ae:28}
	I0929 12:15:16.281020  405898 main.go:141] libmachine: (pause-448284) DBG | domain pause-448284 has defined IP address 192.168.50.251 and MAC address 52:54:00:55:ae:28 in network mk-pause-448284
	I0929 12:15:16.281235  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHHostname
	I0929 12:15:16.284505  405898 main.go:141] libmachine: (pause-448284) DBG | domain pause-448284 has defined MAC address 52:54:00:55:ae:28 in network mk-pause-448284
	I0929 12:15:16.285057  405898 main.go:141] libmachine: (pause-448284) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ae:28", ip: ""} in network mk-pause-448284: {Iface:virbr2 ExpiryTime:2025-09-29 13:13:57 +0000 UTC Type:0 Mac:52:54:00:55:ae:28 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:pause-448284 Clientid:01:52:54:00:55:ae:28}
	I0929 12:15:16.285087  405898 main.go:141] libmachine: (pause-448284) DBG | domain pause-448284 has defined IP address 192.168.50.251 and MAC address 52:54:00:55:ae:28 in network mk-pause-448284
	I0929 12:15:16.285276  405898 provision.go:143] copyHostCerts
	I0929 12:15:16.285346  405898 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-365455/.minikube/ca.pem, removing ...
	I0929 12:15:16.285364  405898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-365455/.minikube/ca.pem
	I0929 12:15:16.285417  405898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21655-365455/.minikube/ca.pem (1078 bytes)
	I0929 12:15:16.285516  405898 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-365455/.minikube/cert.pem, removing ...
	I0929 12:15:16.285525  405898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-365455/.minikube/cert.pem
	I0929 12:15:16.285547  405898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21655-365455/.minikube/cert.pem (1123 bytes)
	I0929 12:15:16.285605  405898 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-365455/.minikube/key.pem, removing ...
	I0929 12:15:16.285615  405898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-365455/.minikube/key.pem
	I0929 12:15:16.285645  405898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21655-365455/.minikube/key.pem (1675 bytes)
	I0929 12:15:16.285743  405898 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21655-365455/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca-key.pem org=jenkins.pause-448284 san=[127.0.0.1 192.168.50.251 localhost minikube pause-448284]
	I0929 12:15:16.539550  405898 provision.go:177] copyRemoteCerts
	I0929 12:15:16.539662  405898 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 12:15:16.539702  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHHostname
	I0929 12:15:16.543040  405898 main.go:141] libmachine: (pause-448284) DBG | domain pause-448284 has defined MAC address 52:54:00:55:ae:28 in network mk-pause-448284
	I0929 12:15:16.543433  405898 main.go:141] libmachine: (pause-448284) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ae:28", ip: ""} in network mk-pause-448284: {Iface:virbr2 ExpiryTime:2025-09-29 13:13:57 +0000 UTC Type:0 Mac:52:54:00:55:ae:28 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:pause-448284 Clientid:01:52:54:00:55:ae:28}
	I0929 12:15:16.543466  405898 main.go:141] libmachine: (pause-448284) DBG | domain pause-448284 has defined IP address 192.168.50.251 and MAC address 52:54:00:55:ae:28 in network mk-pause-448284
	I0929 12:15:16.543670  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHPort
	I0929 12:15:16.543864  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHKeyPath
	I0929 12:15:16.544064  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHUsername
	I0929 12:15:16.544231  405898 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/pause-448284/id_rsa Username:docker}
	I0929 12:15:16.629992  405898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 12:15:16.664530  405898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 12:15:16.698551  405898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 12:15:16.739421  405898 provision.go:87] duration metric: took 464.680086ms to configureAuth
	I0929 12:15:16.739496  405898 buildroot.go:189] setting minikube options for container-runtime
	I0929 12:15:16.739813  405898 config.go:182] Loaded profile config "pause-448284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:15:16.739920  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHHostname
	I0929 12:15:16.743788  405898 main.go:141] libmachine: (pause-448284) DBG | domain pause-448284 has defined MAC address 52:54:00:55:ae:28 in network mk-pause-448284
	I0929 12:15:16.744217  405898 main.go:141] libmachine: (pause-448284) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ae:28", ip: ""} in network mk-pause-448284: {Iface:virbr2 ExpiryTime:2025-09-29 13:13:57 +0000 UTC Type:0 Mac:52:54:00:55:ae:28 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:pause-448284 Clientid:01:52:54:00:55:ae:28}
	I0929 12:15:16.744256  405898 main.go:141] libmachine: (pause-448284) DBG | domain pause-448284 has defined IP address 192.168.50.251 and MAC address 52:54:00:55:ae:28 in network mk-pause-448284
	I0929 12:15:16.744692  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHPort
	I0929 12:15:16.744949  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHKeyPath
	I0929 12:15:16.745171  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHKeyPath
	I0929 12:15:16.745340  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHUsername
	I0929 12:15:16.745551  405898 main.go:141] libmachine: Using SSH client type: native
	I0929 12:15:16.745860  405898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0929 12:15:16.745888  405898 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 12:15:22.294361  405898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 12:15:22.294394  405898 machine.go:96] duration metric: took 6.405401831s to provisionDockerMachine
	I0929 12:15:22.294409  405898 start.go:293] postStartSetup for "pause-448284" (driver="kvm2")
	I0929 12:15:22.294423  405898 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 12:15:22.294449  405898 main.go:141] libmachine: (pause-448284) Calling .DriverName
	I0929 12:15:22.294868  405898 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 12:15:22.294902  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHHostname
	I0929 12:15:22.298891  405898 main.go:141] libmachine: (pause-448284) DBG | domain pause-448284 has defined MAC address 52:54:00:55:ae:28 in network mk-pause-448284
	I0929 12:15:22.299471  405898 main.go:141] libmachine: (pause-448284) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ae:28", ip: ""} in network mk-pause-448284: {Iface:virbr2 ExpiryTime:2025-09-29 13:13:57 +0000 UTC Type:0 Mac:52:54:00:55:ae:28 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:pause-448284 Clientid:01:52:54:00:55:ae:28}
	I0929 12:15:22.299512  405898 main.go:141] libmachine: (pause-448284) DBG | domain pause-448284 has defined IP address 192.168.50.251 and MAC address 52:54:00:55:ae:28 in network mk-pause-448284
	I0929 12:15:22.299773  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHPort
	I0929 12:15:22.300001  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHKeyPath
	I0929 12:15:22.300181  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHUsername
	I0929 12:15:22.300347  405898 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/pause-448284/id_rsa Username:docker}
	I0929 12:15:22.401922  405898 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 12:15:22.409356  405898 info.go:137] Remote host: Buildroot 2025.02
	I0929 12:15:22.409400  405898 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-365455/.minikube/addons for local assets ...
	I0929 12:15:22.409485  405898 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-365455/.minikube/files for local assets ...
	I0929 12:15:22.409621  405898 filesync.go:149] local asset: /home/jenkins/minikube-integration/21655-365455/.minikube/files/etc/ssl/certs/3694232.pem -> 3694232.pem in /etc/ssl/certs
	I0929 12:15:22.409772  405898 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 12:15:22.426175  405898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/files/etc/ssl/certs/3694232.pem --> /etc/ssl/certs/3694232.pem (1708 bytes)
	I0929 12:15:22.465561  405898 start.go:296] duration metric: took 171.132504ms for postStartSetup
	I0929 12:15:22.465614  405898 fix.go:56] duration metric: took 6.756473403s for fixHost
	I0929 12:15:22.465642  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHHostname
	I0929 12:15:22.468966  405898 main.go:141] libmachine: (pause-448284) DBG | domain pause-448284 has defined MAC address 52:54:00:55:ae:28 in network mk-pause-448284
	I0929 12:15:22.469487  405898 main.go:141] libmachine: (pause-448284) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ae:28", ip: ""} in network mk-pause-448284: {Iface:virbr2 ExpiryTime:2025-09-29 13:13:57 +0000 UTC Type:0 Mac:52:54:00:55:ae:28 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:pause-448284 Clientid:01:52:54:00:55:ae:28}
	I0929 12:15:22.469532  405898 main.go:141] libmachine: (pause-448284) DBG | domain pause-448284 has defined IP address 192.168.50.251 and MAC address 52:54:00:55:ae:28 in network mk-pause-448284
	I0929 12:15:22.469705  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHPort
	I0929 12:15:22.469920  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHKeyPath
	I0929 12:15:22.470128  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHKeyPath
	I0929 12:15:22.470303  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHUsername
	I0929 12:15:22.470526  405898 main.go:141] libmachine: Using SSH client type: native
	I0929 12:15:22.470748  405898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.50.251 22 <nil> <nil>}
	I0929 12:15:22.470759  405898 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0929 12:15:22.585963  405898 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759148122.580736337
	
	I0929 12:15:22.586005  405898 fix.go:216] guest clock: 1759148122.580736337
	I0929 12:15:22.586018  405898 fix.go:229] Guest: 2025-09-29 12:15:22.580736337 +0000 UTC Remote: 2025-09-29 12:15:22.465619628 +0000 UTC m=+13.965158981 (delta=115.116709ms)
	I0929 12:15:22.586052  405898 fix.go:200] guest clock delta is within tolerance: 115.116709ms
	I0929 12:15:22.586059  405898 start.go:83] releasing machines lock for "pause-448284", held for 6.876950424s
	I0929 12:15:22.586101  405898 main.go:141] libmachine: (pause-448284) Calling .DriverName
	I0929 12:15:22.586389  405898 main.go:141] libmachine: (pause-448284) Calling .GetIP
	I0929 12:15:22.589888  405898 main.go:141] libmachine: (pause-448284) DBG | domain pause-448284 has defined MAC address 52:54:00:55:ae:28 in network mk-pause-448284
	I0929 12:15:22.590413  405898 main.go:141] libmachine: (pause-448284) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ae:28", ip: ""} in network mk-pause-448284: {Iface:virbr2 ExpiryTime:2025-09-29 13:13:57 +0000 UTC Type:0 Mac:52:54:00:55:ae:28 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:pause-448284 Clientid:01:52:54:00:55:ae:28}
	I0929 12:15:22.590455  405898 main.go:141] libmachine: (pause-448284) DBG | domain pause-448284 has defined IP address 192.168.50.251 and MAC address 52:54:00:55:ae:28 in network mk-pause-448284
	I0929 12:15:22.590669  405898 main.go:141] libmachine: (pause-448284) Calling .DriverName
	I0929 12:15:22.591229  405898 main.go:141] libmachine: (pause-448284) Calling .DriverName
	I0929 12:15:22.591428  405898 main.go:141] libmachine: (pause-448284) Calling .DriverName
	I0929 12:15:22.591531  405898 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 12:15:22.591599  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHHostname
	I0929 12:15:22.591655  405898 ssh_runner.go:195] Run: cat /version.json
	I0929 12:15:22.591677  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHHostname
	I0929 12:15:22.595041  405898 main.go:141] libmachine: (pause-448284) DBG | domain pause-448284 has defined MAC address 52:54:00:55:ae:28 in network mk-pause-448284
	I0929 12:15:22.595120  405898 main.go:141] libmachine: (pause-448284) DBG | domain pause-448284 has defined MAC address 52:54:00:55:ae:28 in network mk-pause-448284
	I0929 12:15:22.595499  405898 main.go:141] libmachine: (pause-448284) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ae:28", ip: ""} in network mk-pause-448284: {Iface:virbr2 ExpiryTime:2025-09-29 13:13:57 +0000 UTC Type:0 Mac:52:54:00:55:ae:28 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:pause-448284 Clientid:01:52:54:00:55:ae:28}
	I0929 12:15:22.595544  405898 main.go:141] libmachine: (pause-448284) DBG | domain pause-448284 has defined IP address 192.168.50.251 and MAC address 52:54:00:55:ae:28 in network mk-pause-448284
	I0929 12:15:22.595574  405898 main.go:141] libmachine: (pause-448284) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ae:28", ip: ""} in network mk-pause-448284: {Iface:virbr2 ExpiryTime:2025-09-29 13:13:57 +0000 UTC Type:0 Mac:52:54:00:55:ae:28 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:pause-448284 Clientid:01:52:54:00:55:ae:28}
	I0929 12:15:22.595593  405898 main.go:141] libmachine: (pause-448284) DBG | domain pause-448284 has defined IP address 192.168.50.251 and MAC address 52:54:00:55:ae:28 in network mk-pause-448284
	I0929 12:15:22.595838  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHPort
	I0929 12:15:22.596065  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHKeyPath
	I0929 12:15:22.596070  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHPort
	I0929 12:15:22.596284  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHKeyPath
	I0929 12:15:22.596288  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHUsername
	I0929 12:15:22.596476  405898 main.go:141] libmachine: (pause-448284) Calling .GetSSHUsername
	I0929 12:15:22.596652  405898 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/pause-448284/id_rsa Username:docker}
	I0929 12:15:22.596715  405898 sshutil.go:53] new ssh client: &{IP:192.168.50.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/pause-448284/id_rsa Username:docker}
	I0929 12:15:22.682341  405898 ssh_runner.go:195] Run: systemctl --version
	I0929 12:15:22.716353  405898 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 12:15:22.999063  405898 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0929 12:15:23.021099  405898 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0929 12:15:23.021192  405898 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 12:15:23.041151  405898 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 12:15:23.041183  405898 start.go:495] detecting cgroup driver to use...
	I0929 12:15:23.041291  405898 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 12:15:23.093937  405898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 12:15:23.132530  405898 docker.go:218] disabling cri-docker service (if available) ...
	I0929 12:15:23.132611  405898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 12:15:23.163121  405898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 12:15:23.201841  405898 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 12:15:23.543186  405898 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 12:15:23.903598  405898 docker.go:234] disabling docker service ...
	I0929 12:15:23.903674  405898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 12:15:23.995877  405898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 12:15:24.041816  405898 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 12:15:24.446610  405898 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 12:15:24.763101  405898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 12:15:24.785403  405898 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 12:15:24.823986  405898 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 12:15:24.824069  405898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:15:24.843024  405898 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0929 12:15:24.843115  405898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:15:24.863199  405898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:15:24.881537  405898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:15:24.896863  405898 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 12:15:24.917349  405898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:15:24.935757  405898 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:15:24.951390  405898 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
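The sed invocations above leave /etc/crio/crio.conf.d/02-crio.conf with a predictable shape. A sketch for verifying the result on the node, with the expected values read directly off the commands (grep -n line numbers will vary):

	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",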
	I0929 12:15:24.976890  405898 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 12:15:24.993900  405898 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 12:15:25.010074  405898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:15:25.304990  405898 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 12:16:55.847456  405898 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.542408175s)
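That restart is the budget-killer: `sudo systemctl restart crio` blocking for 1m30s burns a quarter of the test's 6-minute window before Kubernetes verification even begins. A hedged first pass at diagnosing a slow crio restart on the node (standard systemd tooling; the slow phase is usually visible in the unit log):

	sudo journalctl -u crio --since "10 min ago" --no-pager | tail -n 50
	systemctl show crio -p ExecMainStartTimestamp,ActiveEnterTimestamp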
	I0929 12:16:55.847506  405898 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 12:16:55.847575  405898 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 12:16:55.853953  405898 start.go:563] Will wait 60s for crictl version
	I0929 12:16:55.854048  405898 ssh_runner.go:195] Run: which crictl
	I0929 12:16:55.858467  405898 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 12:16:55.898131  405898 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0929 12:16:55.898224  405898 ssh_runner.go:195] Run: crio --version
	I0929 12:16:55.928490  405898 ssh_runner.go:195] Run: crio --version
	I0929 12:16:55.960930  405898 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0929 12:16:55.962024  405898 main.go:141] libmachine: (pause-448284) Calling .GetIP
	I0929 12:16:55.965423  405898 main.go:141] libmachine: (pause-448284) DBG | domain pause-448284 has defined MAC address 52:54:00:55:ae:28 in network mk-pause-448284
	I0929 12:16:55.965907  405898 main.go:141] libmachine: (pause-448284) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ae:28", ip: ""} in network mk-pause-448284: {Iface:virbr2 ExpiryTime:2025-09-29 13:13:57 +0000 UTC Type:0 Mac:52:54:00:55:ae:28 Iaid: IPaddr:192.168.50.251 Prefix:24 Hostname:pause-448284 Clientid:01:52:54:00:55:ae:28}
	I0929 12:16:55.965958  405898 main.go:141] libmachine: (pause-448284) DBG | domain pause-448284 has defined IP address 192.168.50.251 and MAC address 52:54:00:55:ae:28 in network mk-pause-448284
	I0929 12:16:55.966219  405898 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0929 12:16:55.970994  405898 kubeadm.go:875] updating cluster {Name:pause-448284 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-448284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 12:16:55.971152  405898 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 12:16:55.971197  405898 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 12:16:56.018198  405898 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 12:16:56.018226  405898 crio.go:433] Images already preloaded, skipping extraction
	I0929 12:16:56.018287  405898 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 12:16:56.056404  405898 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 12:16:56.056433  405898 cache_images.go:85] Images are preloaded, skipping loading
	I0929 12:16:56.056442  405898 kubeadm.go:926] updating node { 192.168.50.251 8443 v1.34.0 crio true true} ...
	I0929 12:16:56.056555  405898 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-448284 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:pause-448284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
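The [Unit]/[Service] fragment above is what gets rendered into the kubelet drop-in scp'd a few lines below (10-kubeadm.conf, 312 bytes). To inspect the merged unit on the node one could run (illustrative, not executed by the test):

    systemctl cat kubelet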
	I0929 12:16:56.056653  405898 ssh_runner.go:195] Run: crio config
	I0929 12:16:56.159628  405898 cni.go:84] Creating CNI manager for ""
	I0929 12:16:56.159666  405898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 12:16:56.159682  405898 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 12:16:56.159717  405898 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.251 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-448284 NodeName:pause-448284 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 12:16:56.159906  405898 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.251
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-448284"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.251"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.251"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
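A rendered config of this shape can be sanity-checked against the target kubeadm release without mutating the node; the path below is the one the runner uploads shortly after this, and the dry-run invocation is illustrative rather than something the test executes:

    # illustrative: validate the generated config without touching the cluster
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run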
	
	I0929 12:16:56.159993  405898 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 12:16:56.186472  405898 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 12:16:56.186558  405898 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 12:16:56.217073  405898 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0929 12:16:56.274516  405898 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 12:16:56.333691  405898 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I0929 12:16:56.413393  405898 ssh_runner.go:195] Run: grep 192.168.50.251	control-plane.minikube.internal$ /etc/hosts
	I0929 12:16:56.425662  405898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:16:56.690641  405898 ssh_runner.go:195] Run: sudo systemctl start kubelet
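Had the kubelet start above needed manual confirmation, a quick probe would be (illustrative, not from the log):

    sudo systemctl is-active kubelet && sudo journalctl -u kubelet -n 20 --no-pager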
	I0929 12:16:56.714360  405898 certs.go:68] Setting up /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/pause-448284 for IP: 192.168.50.251
	I0929 12:16:56.714382  405898 certs.go:194] generating shared ca certs ...
	I0929 12:16:56.714398  405898 certs.go:226] acquiring lock for ca certs: {Name:mk0b410c7c5424a4463d6cf6464227ce4eef65e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:16:56.714584  405898 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21655-365455/.minikube/ca.key
	I0929 12:16:56.714627  405898 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21655-365455/.minikube/proxy-client-ca.key
	I0929 12:16:56.714637  405898 certs.go:256] generating profile certs ...
	I0929 12:16:56.714716  405898 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/pause-448284/client.key
	I0929 12:16:56.714772  405898 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/pause-448284/apiserver.key.d618885c
	I0929 12:16:56.714808  405898 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/pause-448284/proxy-client.key
	I0929 12:16:56.714921  405898 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/369423.pem (1338 bytes)
	W0929 12:16:56.714949  405898 certs.go:480] ignoring /home/jenkins/minikube-integration/21655-365455/.minikube/certs/369423_empty.pem, impossibly tiny 0 bytes
	I0929 12:16:56.714956  405898 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 12:16:56.715010  405898 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca.pem (1078 bytes)
	I0929 12:16:56.715037  405898 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/cert.pem (1123 bytes)
	I0929 12:16:56.715063  405898 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/key.pem (1675 bytes)
	I0929 12:16:56.715100  405898 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/files/etc/ssl/certs/3694232.pem (1708 bytes)
	I0929 12:16:56.715733  405898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 12:16:56.748676  405898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 12:16:56.783570  405898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 12:16:56.818429  405898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 12:16:56.853960  405898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/pause-448284/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 12:16:56.887426  405898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/pause-448284/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 12:16:56.921801  405898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/pause-448284/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 12:16:56.958988  405898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/pause-448284/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 12:16:56.995546  405898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/files/etc/ssl/certs/3694232.pem --> /usr/share/ca-certificates/3694232.pem (1708 bytes)
	I0929 12:16:57.031527  405898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 12:16:57.062054  405898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/certs/369423.pem --> /usr/share/ca-certificates/369423.pem (1338 bytes)
	I0929 12:16:57.098781  405898 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 12:16:57.123560  405898 ssh_runner.go:195] Run: openssl version
	I0929 12:16:57.130700  405898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 12:16:57.144703  405898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:16:57.149859  405898 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 11:16 /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:16:57.149942  405898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:16:57.157402  405898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 12:16:57.178571  405898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/369423.pem && ln -fs /usr/share/ca-certificates/369423.pem /etc/ssl/certs/369423.pem"
	I0929 12:16:57.192154  405898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/369423.pem
	I0929 12:16:57.198606  405898 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 11:26 /usr/share/ca-certificates/369423.pem
	I0929 12:16:57.198734  405898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/369423.pem
	I0929 12:16:57.218862  405898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/369423.pem /etc/ssl/certs/51391683.0"
	I0929 12:16:57.232351  405898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3694232.pem && ln -fs /usr/share/ca-certificates/3694232.pem /etc/ssl/certs/3694232.pem"
	I0929 12:16:57.260836  405898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3694232.pem
	I0929 12:16:57.267391  405898 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 11:26 /usr/share/ca-certificates/3694232.pem
	I0929 12:16:57.267474  405898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3694232.pem
	I0929 12:16:57.275591  405898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3694232.pem /etc/ssl/certs/3ec20f2e.0"
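The test -L / ln -fs pairs above reproduce OpenSSL's hashed-directory layout: each CA under /etc/ssl/certs is symlinked under its subject hash with a .0 suffix, which is where names like b5213941.0, 51391683.0 and 3ec20f2e.0 come from. A sketch of the derivation for one CA (illustrative):

    # illustrative: derive the hash-named symlink for the minikube CA
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"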
	I0929 12:16:57.291212  405898 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 12:16:57.300768  405898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 12:16:57.310739  405898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 12:16:57.322931  405898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 12:16:57.338830  405898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 12:16:57.349728  405898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 12:16:57.361576  405898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
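Each -checkend 86400 probe above exits non-zero if the certificate expires within 24 hours, which is how the runner decides between reusing and regenerating control-plane certs. The same sweep, for a subset of the certs checked above, could be done by hand (illustrative):

    # illustrative: flag any of these certs expiring within 24h
    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
        || echo "${c}.crt expires within 24h"
    done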
	I0929 12:16:57.370070  405898 kubeadm.go:392] StartCluster: {Name:pause-448284 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-448284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.251 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:16:57.370230  405898 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 12:16:57.370332  405898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 12:16:57.437863  405898 cri.go:89] found id: "e92ea40987ef263104fc84da9b5fab382f0ca424dc09d428b06e20eb93c2447c"
	I0929 12:16:57.437894  405898 cri.go:89] found id: "1f47960f0db4d842151528da6c7020d59f4d6bede1042da342f257bc15eaf437"
	I0929 12:16:57.437902  405898 cri.go:89] found id: "e98a67edfdc65a81f26cbc0e77bfa203408717769b7fcfcad8430bdcfd69b548"
	I0929 12:16:57.437907  405898 cri.go:89] found id: "56401441f4cd486c3f109611885412061fd6d95603e4661822430b53e979b4c9"
	I0929 12:16:57.437910  405898 cri.go:89] found id: "f5d57fcd84b137bc2cabf619630723bdacad6955827512a828ae2a79965a3466"
	I0929 12:16:57.437921  405898 cri.go:89] found id: "0b9ac69f6bf6aa520160c113d51efdb95c05d442b7ed06d627b337b0fe5f1eca"
	I0929 12:16:57.437925  405898 cri.go:89] found id: "e3086beffb521da152392a5cb940207cd0c6f3e49c27456d6869c6f609f921cf"
	I0929 12:16:57.437929  405898 cri.go:89] found id: "948b9271b3533c244b8b8078effff4d6dc750055e7998f71476db3ff7100e454"
	I0929 12:16:57.437933  405898 cri.go:89] found id: "b62d8c960e306bcb4522237c42f35b6bd9667f47dbaca61f7236714011d4941f"
	I0929 12:16:57.437945  405898 cri.go:89] found id: "42ef5eec1187de1fe766a8eca027b3bbf23becf3e2500e1a778afd46f6b4a891"
	I0929 12:16:57.437950  405898 cri.go:89] found id: "e80b6bc5f079dd10c4c31605d9b8961c35d67136a78ef88b07ac431c5d8197e3"
	I0929 12:16:57.437954  405898 cri.go:89] found id: "75570a955f26cab71d1378a9be94e28601bd5e67f176343f91f975c2256e6de5"
	I0929 12:16:57.437963  405898 cri.go:89] found id: ""
	I0929 12:16:57.438061  405898 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
pause_test.go:94: failed to second start a running minikube with args: "out/minikube-linux-amd64 start -p pause-448284 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-448284 -n pause-448284
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-448284 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-448284 logs -n 25: (1.349154344s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                  ARGS                                                                                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-657893 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                              │ NoKubernetes-657893       │ jenkins │ v1.37.0 │ 29 Sep 25 12:16 UTC │ 29 Sep 25 12:16 UTC │
	│ start   │ -p kubernetes-upgrade-494977 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                             │ kubernetes-upgrade-494977 │ jenkins │ v1.37.0 │ 29 Sep 25 12:16 UTC │                     │
	│ start   │ -p kubernetes-upgrade-494977 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                      │ kubernetes-upgrade-494977 │ jenkins │ v1.37.0 │ 29 Sep 25 12:16 UTC │ 29 Sep 25 12:17 UTC │
	│ ssh     │ -p NoKubernetes-657893 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                                                 │ NoKubernetes-657893       │ jenkins │ v1.37.0 │ 29 Sep 25 12:16 UTC │                     │
	│ delete  │ -p NoKubernetes-657893                                                                                                                                                                                                                                                  │ NoKubernetes-657893       │ jenkins │ v1.37.0 │ 29 Sep 25 12:16 UTC │ 29 Sep 25 12:16 UTC │
	│ start   │ -p running-upgrade-460754 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                      │ running-upgrade-460754    │ jenkins │ v1.37.0 │ 29 Sep 25 12:16 UTC │ 29 Sep 25 12:17 UTC │
	│ start   │ -p force-systemd-flag-785669 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                   │ force-systemd-flag-785669 │ jenkins │ v1.37.0 │ 29 Sep 25 12:16 UTC │ 29 Sep 25 12:17 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-460754 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                                             │ running-upgrade-460754    │ jenkins │ v1.37.0 │ 29 Sep 25 12:17 UTC │                     │
	│ delete  │ -p running-upgrade-460754                                                                                                                                                                                                                                               │ running-upgrade-460754    │ jenkins │ v1.37.0 │ 29 Sep 25 12:17 UTC │ 29 Sep 25 12:17 UTC │
	│ start   │ -p force-systemd-env-554195 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                    │ force-systemd-env-554195  │ jenkins │ v1.37.0 │ 29 Sep 25 12:17 UTC │ 29 Sep 25 12:18 UTC │
	│ delete  │ -p kubernetes-upgrade-494977                                                                                                                                                                                                                                            │ kubernetes-upgrade-494977 │ jenkins │ v1.37.0 │ 29 Sep 25 12:17 UTC │ 29 Sep 25 12:17 UTC │
	│ start   │ -p cert-expiration-356327 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                        │ cert-expiration-356327    │ jenkins │ v1.37.0 │ 29 Sep 25 12:17 UTC │ 29 Sep 25 12:18 UTC │
	│ ssh     │ force-systemd-flag-785669 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                                                    │ force-systemd-flag-785669 │ jenkins │ v1.37.0 │ 29 Sep 25 12:17 UTC │ 29 Sep 25 12:17 UTC │
	│ delete  │ -p force-systemd-flag-785669                                                                                                                                                                                                                                            │ force-systemd-flag-785669 │ jenkins │ v1.37.0 │ 29 Sep 25 12:17 UTC │ 29 Sep 25 12:17 UTC │
	│ start   │ -p cert-options-163071 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                     │ cert-options-163071       │ jenkins │ v1.37.0 │ 29 Sep 25 12:17 UTC │ 29 Sep 25 12:18 UTC │
	│ delete  │ -p force-systemd-env-554195                                                                                                                                                                                                                                             │ force-systemd-env-554195  │ jenkins │ v1.37.0 │ 29 Sep 25 12:18 UTC │ 29 Sep 25 12:18 UTC │
	│ start   │ -p old-k8s-version-832485 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0 │ old-k8s-version-832485    │ jenkins │ v1.37.0 │ 29 Sep 25 12:18 UTC │ 29 Sep 25 12:19 UTC │
	│ ssh     │ cert-options-163071 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                                             │ cert-options-163071       │ jenkins │ v1.37.0 │ 29 Sep 25 12:18 UTC │ 29 Sep 25 12:18 UTC │
	│ ssh     │ -p cert-options-163071 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                                           │ cert-options-163071       │ jenkins │ v1.37.0 │ 29 Sep 25 12:18 UTC │ 29 Sep 25 12:18 UTC │
	│ delete  │ -p cert-options-163071                                                                                                                                                                                                                                                  │ cert-options-163071       │ jenkins │ v1.37.0 │ 29 Sep 25 12:18 UTC │ 29 Sep 25 12:18 UTC │
	│ start   │ -p embed-certs-046125 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0                                                                                        │ embed-certs-046125        │ jenkins │ v1.37.0 │ 29 Sep 25 12:18 UTC │ 29 Sep 25 12:20 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-832485 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                            │ old-k8s-version-832485    │ jenkins │ v1.37.0 │ 29 Sep 25 12:19 UTC │ 29 Sep 25 12:19 UTC │
	│ stop    │ -p old-k8s-version-832485 --alsologtostderr -v=3                                                                                                                                                                                                                        │ old-k8s-version-832485    │ jenkins │ v1.37.0 │ 29 Sep 25 12:19 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-046125 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                                │ embed-certs-046125        │ jenkins │ v1.37.0 │ 29 Sep 25 12:20 UTC │ 29 Sep 25 12:20 UTC │
	│ stop    │ -p embed-certs-046125 --alsologtostderr -v=3                                                                                                                                                                                                                            │ embed-certs-046125        │ jenkins │ v1.37.0 │ 29 Sep 25 12:20 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 12:18:47
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 12:18:47.035090  410531 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:18:47.035327  410531 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:18:47.035335  410531 out.go:374] Setting ErrFile to fd 2...
	I0929 12:18:47.035338  410531 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:18:47.035571  410531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-365455/.minikube/bin
	I0929 12:18:47.036080  410531 out.go:368] Setting JSON to false
	I0929 12:18:47.037060  410531 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7269,"bootTime":1759141058,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:18:47.037140  410531 start.go:140] virtualization: kvm guest
	I0929 12:18:47.039033  410531 out.go:179] * [embed-certs-046125] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 12:18:47.040535  410531 out.go:179]   - MINIKUBE_LOCATION=21655
	I0929 12:18:47.040518  410531 notify.go:220] Checking for updates...
	I0929 12:18:47.042687  410531 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:18:47.043828  410531 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21655-365455/kubeconfig
	I0929 12:18:47.045072  410531 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-365455/.minikube
	I0929 12:18:47.046284  410531 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 12:18:47.047638  410531 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:18:47.049246  410531 config.go:182] Loaded profile config "cert-expiration-356327": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:18:47.049374  410531 config.go:182] Loaded profile config "old-k8s-version-832485": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0929 12:18:47.049535  410531 config.go:182] Loaded profile config "pause-448284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:18:47.049663  410531 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:18:47.092806  410531 out.go:179] * Using the kvm2 driver based on user configuration
	I0929 12:18:47.093858  410531 start.go:304] selected driver: kvm2
	I0929 12:18:47.093880  410531 start.go:924] validating driver "kvm2" against <nil>
	I0929 12:18:47.093894  410531 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:18:47.094676  410531 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:18:47.094811  410531 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21655-365455/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 12:18:47.108773  410531 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 12:18:47.108825  410531 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21655-365455/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 12:18:47.122842  410531 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 12:18:47.122914  410531 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 12:18:47.123398  410531 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:18:47.123454  410531 cni.go:84] Creating CNI manager for ""
	I0929 12:18:47.123524  410531 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 12:18:47.123543  410531 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 12:18:47.123626  410531 start.go:348] cluster config:
	{Name:embed-certs-046125 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-046125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:18:47.123788  410531 iso.go:125] acquiring lock: {Name:mkf6a4bd1628698e7eb4c42d44aa8328e64686d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:18:47.125568  410531 out.go:179] * Starting "embed-certs-046125" primary control-plane node in "embed-certs-046125" cluster
	I0929 12:18:47.126720  410531 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 12:18:47.126767  410531 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21655-365455/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 12:18:47.126780  410531 cache.go:58] Caching tarball of preloaded images
	I0929 12:18:47.126898  410531 preload.go:172] Found /home/jenkins/minikube-integration/21655-365455/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 12:18:47.126913  410531 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 12:18:47.127056  410531 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/config.json ...
	I0929 12:18:47.127104  410531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/config.json: {Name:mk35a3edfc8b04bfb04270f14e640b669cf31502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:18:47.127288  410531 start.go:360] acquireMachinesLock for embed-certs-046125: {Name:mk02e688f69f8dfa335651bd732d9d18b60c0952 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0929 12:18:47.127330  410531 start.go:364] duration metric: took 22.348µs to acquireMachinesLock for "embed-certs-046125"
	I0929 12:18:47.127354  410531 start.go:93] Provisioning new machine with config: &{Name:embed-certs-046125 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-046125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 12:18:47.127432  410531 start.go:125] createHost starting for "" (driver="kvm2")
	W0929 12:18:45.376514  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:18:47.876665  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	I0929 12:18:44.796606  409892 crio.go:462] duration metric: took 1.830679616s to copy over tarball
	I0929 12:18:44.796748  409892 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0929 12:18:46.803572  409892 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.006774143s)
	I0929 12:18:46.803610  409892 crio.go:469] duration metric: took 2.006962148s to extract the tarball
	I0929 12:18:46.803620  409892 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0929 12:18:46.851126  409892 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 12:18:46.896607  409892 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 12:18:46.896633  409892 cache_images.go:85] Images are preloaded, skipping loading
	I0929 12:18:46.896641  409892 kubeadm.go:926] updating node { 192.168.61.163 8443 v1.28.0 crio true true} ...
	I0929 12:18:46.896795  409892 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-832485 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-832485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 12:18:46.896897  409892 ssh_runner.go:195] Run: crio config
	I0929 12:18:46.947493  409892 cni.go:84] Creating CNI manager for ""
	I0929 12:18:46.947517  409892 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 12:18:46.947532  409892 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 12:18:46.947555  409892 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.163 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-832485 NodeName:old-k8s-version-832485 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 12:18:46.947790  409892 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.163
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-832485"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
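Note the schema difference from the v1.34.0 config generated for pause-448284 earlier in this log: kubeadm.k8s.io/v1beta3 above takes extraArgs as a plain string map, while v1beta4 takes a list of name/value pairs. Both spell the same flag, shown side by side for reference:

    # v1beta3 (old-k8s-version-832485, above)
    extraArgs:
      leader-elect: "false"
    # v1beta4 (pause-448284, earlier in this log)
    extraArgs:
      - name: "leader-elect"
        value: "false"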
	
	I0929 12:18:46.948195  409892 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I0929 12:18:46.961661  409892 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 12:18:46.961741  409892 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 12:18:46.973777  409892 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0929 12:18:46.998456  409892 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 12:18:47.021776  409892 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0929 12:18:47.044851  409892 ssh_runner.go:195] Run: grep 192.168.61.163	control-plane.minikube.internal$ /etc/hosts
	I0929 12:18:47.049663  409892 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.163	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
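The one-liner above is the usual /etc/hosts refresh: strip any stale control-plane.minikube.internal entry, append the current IP, and install the result via a temp file so /etc/hosts is replaced with a single cp rather than edited in place. An annotated restatement (illustrative):

    # illustrative restatement of the hosts refresh above
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.61.163\tcontrol-plane.minikube.internal\n'
    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts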
	I0929 12:18:47.069743  409892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:18:47.230375  409892 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:18:47.286483  409892 certs.go:68] Setting up /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485 for IP: 192.168.61.163
	I0929 12:18:47.286509  409892 certs.go:194] generating shared ca certs ...
	I0929 12:18:47.286525  409892 certs.go:226] acquiring lock for ca certs: {Name:mk0b410c7c5424a4463d6cf6464227ce4eef65e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:18:47.286729  409892 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21655-365455/.minikube/ca.key
	I0929 12:18:47.286795  409892 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21655-365455/.minikube/proxy-client-ca.key
	I0929 12:18:47.286810  409892 certs.go:256] generating profile certs ...
	I0929 12:18:47.286888  409892 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/client.key
	I0929 12:18:47.286906  409892 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/client.crt with IP's: []
	I0929 12:18:47.483868  409892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/client.crt ...
	I0929 12:18:47.483902  409892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/client.crt: {Name:mk95c6fcb6ebdb91358e0c344f68b7069c1ea536 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:18:47.484123  409892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/client.key ...
	I0929 12:18:47.484143  409892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/client.key: {Name:mkc64f368e62d9e960fac7205e079f4b985a5eda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:18:47.484266  409892 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/apiserver.key.a1a0b062
	I0929 12:18:47.484286  409892 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/apiserver.crt.a1a0b062 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.163]
	I0929 12:18:47.606291  409892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/apiserver.crt.a1a0b062 ...
	I0929 12:18:47.606323  409892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/apiserver.crt.a1a0b062: {Name:mkb55fb8a2492a64611495cd01ef7de23f703709 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:18:47.606523  409892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/apiserver.key.a1a0b062 ...
	I0929 12:18:47.606542  409892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/apiserver.key.a1a0b062: {Name:mk55c58040612df69155acbf82e0634225ab987a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:18:47.606653  409892 certs.go:381] copying /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/apiserver.crt.a1a0b062 -> /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/apiserver.crt
	I0929 12:18:47.606759  409892 certs.go:385] copying /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/apiserver.key.a1a0b062 -> /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/apiserver.key
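Once the apiserver cert generated above (SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.61.163) is copied to the node, its SAN list can be confirmed the same way the cert-options test does in the Audit table (illustrative):

    sudo openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'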
	I0929 12:18:47.606824  409892 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/proxy-client.key
	I0929 12:18:47.606840  409892 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/proxy-client.crt with IP's: []
	I0929 12:18:47.711338  409892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/proxy-client.crt ...
	I0929 12:18:47.711371  409892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/proxy-client.crt: {Name:mk98d12996b7485fddf26981caf7814f041914ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:18:47.711603  409892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/proxy-client.key ...
	I0929 12:18:47.711625  409892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/proxy-client.key: {Name:mkdfcbfa827c41e8728880be861ae8a89b856200 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
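The run above mints the per-profile certificates (client, apiserver, proxy-client), each signed by the shared minikube CA. A minimal standard-library sketch of the same idea (this is not minikube's actual crypto.go; the self-signed CA, the CommonNames, and printing to stdout are illustrative, and the 26280-hour lifetime simply mirrors the CertExpiration setting logged below):

// Hedged sketch of a "generate signed profile cert" step: make a CA,
// then a leaf cert carrying IP SANs, signed by that CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		log.Fatal(err)
	}
	return v
}

func main() {
	// Stand-in for .minikube/ca.{crt,key}: a freshly self-signed CA.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Profile cert signed by the CA, with IP SANs like the apiserver cert above.
	key := must(rsa.GenerateKey(rand.Reader, 2048))
	leaf := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.163")},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der := must(x509.CreateCertificate(rand.Reader, leaf, caCert, &key.PublicKey, caKey))
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}

The real code additionally writes the key pair to disk and takes a file lock around each write, which is what the lock.go:35 "WriteFile acquiring" lines record.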
	I0929 12:18:47.711939  409892 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/369423.pem (1338 bytes)
	W0929 12:18:47.711998  409892 certs.go:480] ignoring /home/jenkins/minikube-integration/21655-365455/.minikube/certs/369423_empty.pem, impossibly tiny 0 bytes
	I0929 12:18:47.712010  409892 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 12:18:47.712033  409892 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca.pem (1078 bytes)
	I0929 12:18:47.712053  409892 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/cert.pem (1123 bytes)
	I0929 12:18:47.712070  409892 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/key.pem (1675 bytes)
	I0929 12:18:47.712103  409892 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/files/etc/ssl/certs/3694232.pem (1708 bytes)
	I0929 12:18:47.712803  409892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 12:18:47.746574  409892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 12:18:47.779156  409892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 12:18:47.812229  409892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 12:18:47.844597  409892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0929 12:18:47.874909  409892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 12:18:47.904395  409892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 12:18:47.934589  409892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0929 12:18:47.978385  409892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/certs/369423.pem --> /usr/share/ca-certificates/369423.pem (1338 bytes)
	I0929 12:18:48.011346  409892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/files/etc/ssl/certs/3694232.pem --> /usr/share/ca-certificates/3694232.pem (1708 bytes)
	I0929 12:18:48.050279  409892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 12:18:48.081454  409892 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 12:18:48.107907  409892 ssh_runner.go:195] Run: openssl version
	I0929 12:18:48.114735  409892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/369423.pem && ln -fs /usr/share/ca-certificates/369423.pem /etc/ssl/certs/369423.pem"
	I0929 12:18:48.128274  409892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/369423.pem
	I0929 12:18:48.134331  409892 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 11:26 /usr/share/ca-certificates/369423.pem
	I0929 12:18:48.134411  409892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/369423.pem
	I0929 12:18:48.141637  409892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/369423.pem /etc/ssl/certs/51391683.0"
	I0929 12:18:48.156208  409892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3694232.pem && ln -fs /usr/share/ca-certificates/3694232.pem /etc/ssl/certs/3694232.pem"
	I0929 12:18:48.169642  409892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3694232.pem
	I0929 12:18:48.175245  409892 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 11:26 /usr/share/ca-certificates/3694232.pem
	I0929 12:18:48.175323  409892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3694232.pem
	I0929 12:18:48.185469  409892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3694232.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 12:18:48.204639  409892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 12:18:48.221819  409892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:18:48.228910  409892 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 11:16 /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:18:48.228994  409892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:18:48.239136  409892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
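The openssl x509 -hash calls above compute each CA's subject hash; OpenSSL-style trust directories look certificates up by that hash, which is why the symlinks are named 51391683.0, 3ec20f2e.0, and b5213941.0. A hedged sketch of the same hash-and-link step (the paths are illustrative, writing under /etc/ssl/certs needs root, and linkCA is a name invented here, not a minikube function):

// Hash a CA cert with the openssl CLI, then expose it to the system
// trust store as /etc/ssl/certs/<subject-hash>.0, mirroring `ln -fs`.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCA(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, as `ln -fs` does
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}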
	I0929 12:18:48.256122  409892 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 12:18:48.261753  409892 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 12:18:48.261816  409892 kubeadm.go:392] StartCluster: {Name:old-k8s-version-832485 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-832485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.163 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:18:48.261888  409892 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 12:18:48.261947  409892 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 12:18:48.310458  409892 cri.go:89] found id: ""
	I0929 12:18:48.310529  409892 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 12:18:48.325578  409892 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 12:18:48.340281  409892 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 12:18:48.354676  409892 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 12:18:48.354695  409892 kubeadm.go:157] found existing configuration files:
	
	I0929 12:18:48.354741  409892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 12:18:48.368328  409892 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 12:18:48.368404  409892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 12:18:48.381294  409892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 12:18:48.392588  409892 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 12:18:48.392669  409892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 12:18:48.406056  409892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 12:18:48.417819  409892 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 12:18:48.417882  409892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 12:18:48.429354  409892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 12:18:48.440774  409892 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 12:18:48.440841  409892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
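The grep/rm sequence above is a staleness check: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and removed otherwise so kubeadm can write a fresh one. A small sketch of that pattern (the file list and endpoint are copied from the log; the coarse error handling is a simplification, not minikube's kubeadm.go):

// Keep a kubeconfig only if it references the expected endpoint;
// otherwise remove it, the equivalent of the grep + `sudo rm -f` above.
package main

import (
	"bytes"
	"fmt"
	"os"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			os.Remove(f) // missing or stale: safe to regenerate
			fmt.Println("removed (or absent):", f)
		}
	}
}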
	I0929 12:18:48.452650  409892 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0929 12:18:48.509771  409892 kubeadm.go:310] [init] Using Kubernetes version: v1.28.0
	I0929 12:18:48.509831  409892 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 12:18:48.639999  409892 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 12:18:48.640164  409892 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 12:18:48.640333  409892 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0929 12:18:48.883555  409892 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 12:18:49.109062  409892 out.go:252]   - Generating certificates and keys ...
	I0929 12:18:49.109195  409892 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 12:18:49.109308  409892 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 12:18:49.109413  409892 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 12:18:49.223310  409892 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 12:18:49.372630  409892 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 12:18:49.607719  409892 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 12:18:49.908266  409892 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 12:18:49.909387  409892 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-832485] and IPs [192.168.61.163 127.0.0.1 ::1]
	I0929 12:18:50.186571  409892 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 12:18:50.186818  409892 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-832485] and IPs [192.168.61.163 127.0.0.1 ::1]
	I0929 12:18:50.280196  409892 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 12:18:50.338054  409892 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 12:18:50.436804  409892 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 12:18:50.437020  409892 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 12:18:50.725614  409892 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 12:18:51.035150  409892 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 12:18:51.283466  409892 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 12:18:51.489519  409892 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 12:18:51.489651  409892 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 12:18:51.492354  409892 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 12:18:47.128987  410531 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0929 12:18:47.129153  410531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 12:18:47.129206  410531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:18:47.142707  410531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35251
	I0929 12:18:47.143314  410531 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:18:47.143950  410531 main.go:141] libmachine: Using API Version  1
	I0929 12:18:47.143987  410531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:18:47.144349  410531 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:18:47.144564  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetMachineName
	I0929 12:18:47.144716  410531 main.go:141] libmachine: (embed-certs-046125) Calling .DriverName
	I0929 12:18:47.144871  410531 start.go:159] libmachine.API.Create for "embed-certs-046125" (driver="kvm2")
	I0929 12:18:47.144925  410531 client.go:168] LocalClient.Create starting
	I0929 12:18:47.144963  410531 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca.pem
	I0929 12:18:47.145011  410531 main.go:141] libmachine: Decoding PEM data...
	I0929 12:18:47.145032  410531 main.go:141] libmachine: Parsing certificate...
	I0929 12:18:47.145104  410531 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21655-365455/.minikube/certs/cert.pem
	I0929 12:18:47.145139  410531 main.go:141] libmachine: Decoding PEM data...
	I0929 12:18:47.145160  410531 main.go:141] libmachine: Parsing certificate...
	I0929 12:18:47.145191  410531 main.go:141] libmachine: Running pre-create checks...
	I0929 12:18:47.145203  410531 main.go:141] libmachine: (embed-certs-046125) Calling .PreCreateCheck
	I0929 12:18:47.145620  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetConfigRaw
	I0929 12:18:47.146048  410531 main.go:141] libmachine: Creating machine...
	I0929 12:18:47.146061  410531 main.go:141] libmachine: (embed-certs-046125) Calling .Create
	I0929 12:18:47.146189  410531 main.go:141] libmachine: (embed-certs-046125) creating domain...
	I0929 12:18:47.146205  410531 main.go:141] libmachine: (embed-certs-046125) creating network...
	I0929 12:18:47.147759  410531 main.go:141] libmachine: (embed-certs-046125) DBG | found existing default network
	I0929 12:18:47.147961  410531 main.go:141] libmachine: (embed-certs-046125) DBG | <network connections='3'>
	I0929 12:18:47.148002  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <name>default</name>
	I0929 12:18:47.148027  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I0929 12:18:47.148043  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <forward mode='nat'>
	I0929 12:18:47.148071  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     <nat>
	I0929 12:18:47.148096  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <port start='1024' end='65535'/>
	I0929 12:18:47.148106  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     </nat>
	I0929 12:18:47.148119  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   </forward>
	I0929 12:18:47.148173  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I0929 12:18:47.148198  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I0929 12:18:47.148213  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I0929 12:18:47.148221  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     <dhcp>
	I0929 12:18:47.148238  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I0929 12:18:47.148251  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     </dhcp>
	I0929 12:18:47.148260  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   </ip>
	I0929 12:18:47.148269  410531 main.go:141] libmachine: (embed-certs-046125) DBG | </network>
	I0929 12:18:47.148279  410531 main.go:141] libmachine: (embed-certs-046125) DBG | 
	I0929 12:18:47.150584  410531 main.go:141] libmachine: (embed-certs-046125) DBG | I0929 12:18:47.150421  410560 network.go:209] skipping subnet 192.168.39.0/24 that is reserved: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0929 12:18:47.151228  410531 main.go:141] libmachine: (embed-certs-046125) DBG | I0929 12:18:47.151152  410560 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:df:37:20} reservation:<nil>}
	I0929 12:18:47.151762  410531 main.go:141] libmachine: (embed-certs-046125) DBG | I0929 12:18:47.151679  410560 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:13:59:44} reservation:<nil>}
	I0929 12:18:47.152143  410531 main.go:141] libmachine: (embed-certs-046125) DBG | I0929 12:18:47.152055  410560 network.go:211] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:cb:24:8e} reservation:<nil>}
	I0929 12:18:47.152842  410531 main.go:141] libmachine: (embed-certs-046125) DBG | I0929 12:18:47.152766  410560 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003a6760}
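The network.go lines above scan candidate 192.168.x.0/24 blocks, skip the ones that are reserved or already backing a virbr interface, and settle on the first free subnet (192.168.83.0/24 here). A rough sketch of that scan, with a deliberately simplified "taken" check that only inspects local interface addresses (an assumption; minikube's real check also consults reservations and existing libvirt networks):

// Walk candidate private /24s and report the first one no local
// interface already sits in, echoing the skip/use lines above.
package main

import (
	"fmt"
	"net"
)

func taken(subnet *net.IPNet) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true // be conservative on error
	}
	for _, a := range addrs {
		if ip, _, err := net.ParseCIDR(a.String()); err == nil && subnet.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	for _, third := range []int{39, 50, 61, 72, 83, 94} {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		_, subnet, _ := net.ParseCIDR(cidr)
		if taken(subnet) {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", cidr)
		return
	}
}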
	I0929 12:18:47.152865  410531 main.go:141] libmachine: (embed-certs-046125) DBG | defining private network:
	I0929 12:18:47.152877  410531 main.go:141] libmachine: (embed-certs-046125) DBG | 
	I0929 12:18:47.152886  410531 main.go:141] libmachine: (embed-certs-046125) DBG | <network>
	I0929 12:18:47.152896  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <name>mk-embed-certs-046125</name>
	I0929 12:18:47.152914  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <dns enable='no'/>
	I0929 12:18:47.152924  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I0929 12:18:47.152934  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     <dhcp>
	I0929 12:18:47.152944  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I0929 12:18:47.152958  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     </dhcp>
	I0929 12:18:47.153000  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   </ip>
	I0929 12:18:47.153025  410531 main.go:141] libmachine: (embed-certs-046125) DBG | </network>
	I0929 12:18:47.153041  410531 main.go:141] libmachine: (embed-certs-046125) DBG | 
	I0929 12:18:47.158359  410531 main.go:141] libmachine: (embed-certs-046125) DBG | creating private network mk-embed-certs-046125 192.168.83.0/24...
	I0929 12:18:47.239057  410531 main.go:141] libmachine: (embed-certs-046125) DBG | private network mk-embed-certs-046125 192.168.83.0/24 created
	I0929 12:18:47.239416  410531 main.go:141] libmachine: (embed-certs-046125) DBG | <network>
	I0929 12:18:47.239440  410531 main.go:141] libmachine: (embed-certs-046125) setting up store path in /home/jenkins/minikube-integration/21655-365455/.minikube/machines/embed-certs-046125 ...
	I0929 12:18:47.239450  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <name>mk-embed-certs-046125</name>
	I0929 12:18:47.239464  410531 main.go:141] libmachine: (embed-certs-046125) building disk image from file:///home/jenkins/minikube-integration/21655-365455/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I0929 12:18:47.239477  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <uuid>1cbf97c6-8337-4cea-89a6-960c242e82d8</uuid>
	I0929 12:18:47.239491  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <bridge name='virbr3' stp='on' delay='0'/>
	I0929 12:18:47.239504  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <mac address='52:54:00:7e:fe:16'/>
	I0929 12:18:47.239535  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <dns enable='no'/>
	I0929 12:18:47.239561  410531 main.go:141] libmachine: (embed-certs-046125) Downloading /home/jenkins/minikube-integration/21655-365455/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21655-365455/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I0929 12:18:47.239573  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I0929 12:18:47.239587  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     <dhcp>
	I0929 12:18:47.239596  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I0929 12:18:47.239609  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     </dhcp>
	I0929 12:18:47.239618  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   </ip>
	I0929 12:18:47.239627  410531 main.go:141] libmachine: (embed-certs-046125) DBG | </network>
	I0929 12:18:47.239635  410531 main.go:141] libmachine: (embed-certs-046125) DBG | 
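With a subnet chosen, the driver defines and starts the private libvirt network whose XML is dumped above. A sketch of the equivalent calls, assuming the libvirt.org/go/libvirt bindings (an assumption for illustration; the driver's own code path may differ):

// Persist and start the private network; the XML mirrors the log dump.
package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

const netXML = `<network>
  <name>mk-embed-certs-046125</name>
  <dns enable='no'/>
  <ip address='192.168.83.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.83.2' end='192.168.83.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	net, err := conn.NetworkDefineXML(netXML) // make the definition persistent
	if err != nil {
		log.Fatal(err)
	}
	defer net.Free()

	if err := net.Create(); err != nil { // start it, like `virsh net-start`
		log.Fatal(err)
	}
}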
	I0929 12:18:47.239662  410531 main.go:141] libmachine: (embed-certs-046125) DBG | I0929 12:18:47.239406  410560 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21655-365455/.minikube
	I0929 12:18:47.516845  410531 main.go:141] libmachine: (embed-certs-046125) DBG | I0929 12:18:47.516697  410560 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21655-365455/.minikube/machines/embed-certs-046125/id_rsa...
	I0929 12:18:47.616417  410531 main.go:141] libmachine: (embed-certs-046125) DBG | I0929 12:18:47.616281  410560 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21655-365455/.minikube/machines/embed-certs-046125/embed-certs-046125.rawdisk...
	I0929 12:18:47.616465  410531 main.go:141] libmachine: (embed-certs-046125) DBG | Writing magic tar header
	I0929 12:18:47.616481  410531 main.go:141] libmachine: (embed-certs-046125) DBG | Writing SSH key tar header
	I0929 12:18:47.616494  410531 main.go:141] libmachine: (embed-certs-046125) DBG | I0929 12:18:47.616420  410560 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21655-365455/.minikube/machines/embed-certs-046125 ...
	I0929 12:18:47.616511  410531 main.go:141] libmachine: (embed-certs-046125) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21655-365455/.minikube/machines/embed-certs-046125
	I0929 12:18:47.616533  410531 main.go:141] libmachine: (embed-certs-046125) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21655-365455/.minikube/machines
	I0929 12:18:47.616550  410531 main.go:141] libmachine: (embed-certs-046125) setting executable bit set on /home/jenkins/minikube-integration/21655-365455/.minikube/machines/embed-certs-046125 (perms=drwx------)
	I0929 12:18:47.616568  410531 main.go:141] libmachine: (embed-certs-046125) setting executable bit set on /home/jenkins/minikube-integration/21655-365455/.minikube/machines (perms=drwxr-xr-x)
	I0929 12:18:47.616582  410531 main.go:141] libmachine: (embed-certs-046125) setting executable bit set on /home/jenkins/minikube-integration/21655-365455/.minikube (perms=drwxr-xr-x)
	I0929 12:18:47.616599  410531 main.go:141] libmachine: (embed-certs-046125) setting executable bit set on /home/jenkins/minikube-integration/21655-365455 (perms=drwxrwxr-x)
	I0929 12:18:47.616613  410531 main.go:141] libmachine: (embed-certs-046125) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0929 12:18:47.616638  410531 main.go:141] libmachine: (embed-certs-046125) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21655-365455/.minikube
	I0929 12:18:47.616652  410531 main.go:141] libmachine: (embed-certs-046125) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0929 12:18:47.616662  410531 main.go:141] libmachine: (embed-certs-046125) defining domain...
	I0929 12:18:47.616677  410531 main.go:141] libmachine: (embed-certs-046125) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21655-365455
	I0929 12:18:47.616689  410531 main.go:141] libmachine: (embed-certs-046125) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0929 12:18:47.616703  410531 main.go:141] libmachine: (embed-certs-046125) DBG | checking permissions on dir: /home/jenkins
	I0929 12:18:47.616715  410531 main.go:141] libmachine: (embed-certs-046125) DBG | checking permissions on dir: /home
	I0929 12:18:47.616729  410531 main.go:141] libmachine: (embed-certs-046125) DBG | skipping /home - not owner
	I0929 12:18:47.617986  410531 main.go:141] libmachine: (embed-certs-046125) defining domain using XML: 
	I0929 12:18:47.618013  410531 main.go:141] libmachine: (embed-certs-046125) <domain type='kvm'>
	I0929 12:18:47.618024  410531 main.go:141] libmachine: (embed-certs-046125)   <name>embed-certs-046125</name>
	I0929 12:18:47.618031  410531 main.go:141] libmachine: (embed-certs-046125)   <memory unit='MiB'>3072</memory>
	I0929 12:18:47.618037  410531 main.go:141] libmachine: (embed-certs-046125)   <vcpu>2</vcpu>
	I0929 12:18:47.618046  410531 main.go:141] libmachine: (embed-certs-046125)   <features>
	I0929 12:18:47.618051  410531 main.go:141] libmachine: (embed-certs-046125)     <acpi/>
	I0929 12:18:47.618057  410531 main.go:141] libmachine: (embed-certs-046125)     <apic/>
	I0929 12:18:47.618062  410531 main.go:141] libmachine: (embed-certs-046125)     <pae/>
	I0929 12:18:47.618067  410531 main.go:141] libmachine: (embed-certs-046125)   </features>
	I0929 12:18:47.618072  410531 main.go:141] libmachine: (embed-certs-046125)   <cpu mode='host-passthrough'>
	I0929 12:18:47.618076  410531 main.go:141] libmachine: (embed-certs-046125)   </cpu>
	I0929 12:18:47.618081  410531 main.go:141] libmachine: (embed-certs-046125)   <os>
	I0929 12:18:47.618087  410531 main.go:141] libmachine: (embed-certs-046125)     <type>hvm</type>
	I0929 12:18:47.618092  410531 main.go:141] libmachine: (embed-certs-046125)     <boot dev='cdrom'/>
	I0929 12:18:47.618101  410531 main.go:141] libmachine: (embed-certs-046125)     <boot dev='hd'/>
	I0929 12:18:47.618106  410531 main.go:141] libmachine: (embed-certs-046125)     <bootmenu enable='no'/>
	I0929 12:18:47.618110  410531 main.go:141] libmachine: (embed-certs-046125)   </os>
	I0929 12:18:47.618127  410531 main.go:141] libmachine: (embed-certs-046125)   <devices>
	I0929 12:18:47.618135  410531 main.go:141] libmachine: (embed-certs-046125)     <disk type='file' device='cdrom'>
	I0929 12:18:47.618143  410531 main.go:141] libmachine: (embed-certs-046125)       <source file='/home/jenkins/minikube-integration/21655-365455/.minikube/machines/embed-certs-046125/boot2docker.iso'/>
	I0929 12:18:47.618148  410531 main.go:141] libmachine: (embed-certs-046125)       <target dev='hdc' bus='scsi'/>
	I0929 12:18:47.618155  410531 main.go:141] libmachine: (embed-certs-046125)       <readonly/>
	I0929 12:18:47.618159  410531 main.go:141] libmachine: (embed-certs-046125)     </disk>
	I0929 12:18:47.618168  410531 main.go:141] libmachine: (embed-certs-046125)     <disk type='file' device='disk'>
	I0929 12:18:47.618173  410531 main.go:141] libmachine: (embed-certs-046125)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0929 12:18:47.618206  410531 main.go:141] libmachine: (embed-certs-046125)       <source file='/home/jenkins/minikube-integration/21655-365455/.minikube/machines/embed-certs-046125/embed-certs-046125.rawdisk'/>
	I0929 12:18:47.618231  410531 main.go:141] libmachine: (embed-certs-046125)       <target dev='hda' bus='virtio'/>
	I0929 12:18:47.618243  410531 main.go:141] libmachine: (embed-certs-046125)     </disk>
	I0929 12:18:47.618254  410531 main.go:141] libmachine: (embed-certs-046125)     <interface type='network'>
	I0929 12:18:47.618268  410531 main.go:141] libmachine: (embed-certs-046125)       <source network='mk-embed-certs-046125'/>
	I0929 12:18:47.618293  410531 main.go:141] libmachine: (embed-certs-046125)       <model type='virtio'/>
	I0929 12:18:47.618306  410531 main.go:141] libmachine: (embed-certs-046125)     </interface>
	I0929 12:18:47.618317  410531 main.go:141] libmachine: (embed-certs-046125)     <interface type='network'>
	I0929 12:18:47.618329  410531 main.go:141] libmachine: (embed-certs-046125)       <source network='default'/>
	I0929 12:18:47.618343  410531 main.go:141] libmachine: (embed-certs-046125)       <model type='virtio'/>
	I0929 12:18:47.618355  410531 main.go:141] libmachine: (embed-certs-046125)     </interface>
	I0929 12:18:47.618364  410531 main.go:141] libmachine: (embed-certs-046125)     <serial type='pty'>
	I0929 12:18:47.618377  410531 main.go:141] libmachine: (embed-certs-046125)       <target port='0'/>
	I0929 12:18:47.618385  410531 main.go:141] libmachine: (embed-certs-046125)     </serial>
	I0929 12:18:47.618395  410531 main.go:141] libmachine: (embed-certs-046125)     <console type='pty'>
	I0929 12:18:47.618406  410531 main.go:141] libmachine: (embed-certs-046125)       <target type='serial' port='0'/>
	I0929 12:18:47.618415  410531 main.go:141] libmachine: (embed-certs-046125)     </console>
	I0929 12:18:47.618430  410531 main.go:141] libmachine: (embed-certs-046125)     <rng model='virtio'>
	I0929 12:18:47.618444  410531 main.go:141] libmachine: (embed-certs-046125)       <backend model='random'>/dev/random</backend>
	I0929 12:18:47.618453  410531 main.go:141] libmachine: (embed-certs-046125)     </rng>
	I0929 12:18:47.618463  410531 main.go:141] libmachine: (embed-certs-046125)   </devices>
	I0929 12:18:47.618482  410531 main.go:141] libmachine: (embed-certs-046125) </domain>
	I0929 12:18:47.618497  410531 main.go:141] libmachine: (embed-certs-046125) 
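The domain XML assembled above is then persisted and booted. Same binding assumption as the network sketch; the XML and the name demo-vm here are heavily abbreviated placeholders (a real definition needs the disks, NICs, serial console, and RNG device shown in the log), so treat this as the shape of the calls rather than a bootable config:

// Define the domain from XML, then start it: the "defining domain..."
// and "starting domain..." steps in the log above.
package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

const domainXML = `<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>3072</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type></os>
</domain>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML) // persist the definition
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot it, like `virsh start`
		log.Fatal(err)
	}
}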
	I0929 12:18:47.623287  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6b:99:66 in network default
	I0929 12:18:47.624054  410531 main.go:141] libmachine: (embed-certs-046125) starting domain...
	I0929 12:18:47.624074  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:18:47.624081  410531 main.go:141] libmachine: (embed-certs-046125) ensuring networks are active...
	I0929 12:18:47.624891  410531 main.go:141] libmachine: (embed-certs-046125) Ensuring network default is active
	I0929 12:18:47.625295  410531 main.go:141] libmachine: (embed-certs-046125) Ensuring network mk-embed-certs-046125 is active
	I0929 12:18:47.626051  410531 main.go:141] libmachine: (embed-certs-046125) getting domain XML...
	I0929 12:18:47.627198  410531 main.go:141] libmachine: (embed-certs-046125) DBG | starting domain XML:
	I0929 12:18:47.627211  410531 main.go:141] libmachine: (embed-certs-046125) DBG | <domain type='kvm'>
	I0929 12:18:47.627221  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <name>embed-certs-046125</name>
	I0929 12:18:47.627230  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <uuid>5413b745-10b8-4c2b-affd-812916111579</uuid>
	I0929 12:18:47.627239  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <memory unit='KiB'>3145728</memory>
	I0929 12:18:47.627251  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I0929 12:18:47.627277  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <vcpu placement='static'>2</vcpu>
	I0929 12:18:47.627293  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <os>
	I0929 12:18:47.627306  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0929 12:18:47.627316  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     <boot dev='cdrom'/>
	I0929 12:18:47.627328  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     <boot dev='hd'/>
	I0929 12:18:47.627342  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     <bootmenu enable='no'/>
	I0929 12:18:47.627353  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   </os>
	I0929 12:18:47.627362  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <features>
	I0929 12:18:47.627407  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     <acpi/>
	I0929 12:18:47.627430  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     <apic/>
	I0929 12:18:47.627441  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     <pae/>
	I0929 12:18:47.627451  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   </features>
	I0929 12:18:47.627462  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0929 12:18:47.627470  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <clock offset='utc'/>
	I0929 12:18:47.627479  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <on_poweroff>destroy</on_poweroff>
	I0929 12:18:47.627490  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <on_reboot>restart</on_reboot>
	I0929 12:18:47.627512  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <on_crash>destroy</on_crash>
	I0929 12:18:47.627550  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   <devices>
	I0929 12:18:47.627563  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0929 12:18:47.627568  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     <disk type='file' device='cdrom'>
	I0929 12:18:47.627574  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <driver name='qemu' type='raw'/>
	I0929 12:18:47.627583  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <source file='/home/jenkins/minikube-integration/21655-365455/.minikube/machines/embed-certs-046125/boot2docker.iso'/>
	I0929 12:18:47.627592  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <target dev='hdc' bus='scsi'/>
	I0929 12:18:47.627596  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <readonly/>
	I0929 12:18:47.627605  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0929 12:18:47.627612  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     </disk>
	I0929 12:18:47.627618  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     <disk type='file' device='disk'>
	I0929 12:18:47.627641  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0929 12:18:47.627656  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <source file='/home/jenkins/minikube-integration/21655-365455/.minikube/machines/embed-certs-046125/embed-certs-046125.rawdisk'/>
	I0929 12:18:47.627666  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <target dev='hda' bus='virtio'/>
	I0929 12:18:47.627680  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0929 12:18:47.627690  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     </disk>
	I0929 12:18:47.627696  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0929 12:18:47.627707  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0929 12:18:47.627713  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     </controller>
	I0929 12:18:47.627718  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0929 12:18:47.627726  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0929 12:18:47.627732  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0929 12:18:47.627740  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     </controller>
	I0929 12:18:47.627744  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     <interface type='network'>
	I0929 12:18:47.627752  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <mac address='52:54:00:6c:2b:98'/>
	I0929 12:18:47.627757  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <source network='mk-embed-certs-046125'/>
	I0929 12:18:47.627765  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <model type='virtio'/>
	I0929 12:18:47.627770  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0929 12:18:47.627777  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     </interface>
	I0929 12:18:47.627782  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     <interface type='network'>
	I0929 12:18:47.627789  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <mac address='52:54:00:6b:99:66'/>
	I0929 12:18:47.627794  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <source network='default'/>
	I0929 12:18:47.627799  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <model type='virtio'/>
	I0929 12:18:47.627805  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0929 12:18:47.627817  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     </interface>
	I0929 12:18:47.627824  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     <serial type='pty'>
	I0929 12:18:47.627830  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <target type='isa-serial' port='0'>
	I0929 12:18:47.627837  410531 main.go:141] libmachine: (embed-certs-046125) DBG |         <model name='isa-serial'/>
	I0929 12:18:47.627843  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       </target>
	I0929 12:18:47.627847  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     </serial>
	I0929 12:18:47.627852  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     <console type='pty'>
	I0929 12:18:47.627859  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <target type='serial' port='0'/>
	I0929 12:18:47.627864  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     </console>
	I0929 12:18:47.627868  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     <input type='mouse' bus='ps2'/>
	I0929 12:18:47.627874  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     <input type='keyboard' bus='ps2'/>
	I0929 12:18:47.627878  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     <audio id='1' type='none'/>
	I0929 12:18:47.627883  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     <memballoon model='virtio'>
	I0929 12:18:47.627889  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0929 12:18:47.627898  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     </memballoon>
	I0929 12:18:47.627905  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     <rng model='virtio'>
	I0929 12:18:47.627915  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <backend model='random'>/dev/random</backend>
	I0929 12:18:47.627923  410531 main.go:141] libmachine: (embed-certs-046125) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0929 12:18:47.627947  410531 main.go:141] libmachine: (embed-certs-046125) DBG |     </rng>
	I0929 12:18:47.627964  410531 main.go:141] libmachine: (embed-certs-046125) DBG |   </devices>
	I0929 12:18:47.627995  410531 main.go:141] libmachine: (embed-certs-046125) DBG | </domain>
	I0929 12:18:47.628010  410531 main.go:141] libmachine: (embed-certs-046125) DBG | 
	I0929 12:18:49.642202  410531 main.go:141] libmachine: (embed-certs-046125) waiting for domain to start...
	I0929 12:18:49.643476  410531 main.go:141] libmachine: (embed-certs-046125) domain is now running
	I0929 12:18:49.643498  410531 main.go:141] libmachine: (embed-certs-046125) waiting for IP...
	I0929 12:18:49.644364  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:18:49.645072  410531 main.go:141] libmachine: (embed-certs-046125) DBG | no network interface addresses found for domain embed-certs-046125 (source=lease)
	I0929 12:18:49.645098  410531 main.go:141] libmachine: (embed-certs-046125) DBG | trying to list again with source=arp
	I0929 12:18:49.645480  410531 main.go:141] libmachine: (embed-certs-046125) DBG | unable to find current IP address of domain embed-certs-046125 in network mk-embed-certs-046125 (interfaces detected: [])
	I0929 12:18:49.645511  410531 main.go:141] libmachine: (embed-certs-046125) DBG | I0929 12:18:49.645462  410560 retry.go:31] will retry after 273.044616ms: waiting for domain to come up
	I0929 12:18:49.920373  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:18:49.920994  410531 main.go:141] libmachine: (embed-certs-046125) DBG | no network interface addresses found for domain embed-certs-046125 (source=lease)
	I0929 12:18:49.921021  410531 main.go:141] libmachine: (embed-certs-046125) DBG | trying to list again with source=arp
	I0929 12:18:49.921385  410531 main.go:141] libmachine: (embed-certs-046125) DBG | unable to find current IP address of domain embed-certs-046125 in network mk-embed-certs-046125 (interfaces detected: [])
	I0929 12:18:49.921412  410531 main.go:141] libmachine: (embed-certs-046125) DBG | I0929 12:18:49.921362  410560 retry.go:31] will retry after 373.176554ms: waiting for domain to come up
	I0929 12:18:50.296735  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:18:50.297383  410531 main.go:141] libmachine: (embed-certs-046125) DBG | no network interface addresses found for domain embed-certs-046125 (source=lease)
	I0929 12:18:50.297411  410531 main.go:141] libmachine: (embed-certs-046125) DBG | trying to list again with source=arp
	I0929 12:18:50.297781  410531 main.go:141] libmachine: (embed-certs-046125) DBG | unable to find current IP address of domain embed-certs-046125 in network mk-embed-certs-046125 (interfaces detected: [])
	I0929 12:18:50.297812  410531 main.go:141] libmachine: (embed-certs-046125) DBG | I0929 12:18:50.297749  410560 retry.go:31] will retry after 417.44351ms: waiting for domain to come up
	I0929 12:18:50.717441  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:18:50.718240  410531 main.go:141] libmachine: (embed-certs-046125) DBG | no network interface addresses found for domain embed-certs-046125 (source=lease)
	I0929 12:18:50.718266  410531 main.go:141] libmachine: (embed-certs-046125) DBG | trying to list again with source=arp
	I0929 12:18:50.718663  410531 main.go:141] libmachine: (embed-certs-046125) DBG | unable to find current IP address of domain embed-certs-046125 in network mk-embed-certs-046125 (interfaces detected: [])
	I0929 12:18:50.718721  410531 main.go:141] libmachine: (embed-certs-046125) DBG | I0929 12:18:50.718638  410560 retry.go:31] will retry after 387.145598ms: waiting for domain to come up
	I0929 12:18:51.107380  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:18:51.108080  410531 main.go:141] libmachine: (embed-certs-046125) DBG | no network interface addresses found for domain embed-certs-046125 (source=lease)
	I0929 12:18:51.108104  410531 main.go:141] libmachine: (embed-certs-046125) DBG | trying to list again with source=arp
	I0929 12:18:51.108444  410531 main.go:141] libmachine: (embed-certs-046125) DBG | unable to find current IP address of domain embed-certs-046125 in network mk-embed-certs-046125 (interfaces detected: [])
	I0929 12:18:51.108501  410531 main.go:141] libmachine: (embed-certs-046125) DBG | I0929 12:18:51.108419  410560 retry.go:31] will retry after 536.391239ms: waiting for domain to come up
	I0929 12:18:51.646283  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:18:51.647050  410531 main.go:141] libmachine: (embed-certs-046125) DBG | no network interface addresses found for domain embed-certs-046125 (source=lease)
	I0929 12:18:51.647081  410531 main.go:141] libmachine: (embed-certs-046125) DBG | trying to list again with source=arp
	I0929 12:18:51.647445  410531 main.go:141] libmachine: (embed-certs-046125) DBG | unable to find current IP address of domain embed-certs-046125 in network mk-embed-certs-046125 (interfaces detected: [])
	I0929 12:18:51.647476  410531 main.go:141] libmachine: (embed-certs-046125) DBG | I0929 12:18:51.647413  410560 retry.go:31] will retry after 820.634819ms: waiting for domain to come up
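The retry.go:31 lines above show the wait-for-IP loop: query the network for the domain's DHCP lease (falling back to ARP), and if nothing is there yet, sleep for a growing, jittered delay and try again. A standalone sketch of that backoff pattern, with lookupIP as a placeholder standing in for the lease/ARP query (the exact growth and cap of minikube's backoff are not reproduced here):

// Poll for the domain's IP with a growing, jittered delay until it
// appears or the deadline passes, echoing the "will retry after" lines.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupIP() (string, error) {
	return "", errors.New("no lease yet") // placeholder query
}

func main() {
	delay := 250 * time.Millisecond
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("domain is up at", ip)
			return
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", jittered)
		time.Sleep(jittered)
		delay = delay * 3 / 2 // grow the backoff; real code caps this
	}
	fmt.Println("timed out waiting for IP")
}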
	W0929 12:18:49.877835  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:18:52.374696  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	I0929 12:18:51.495632  409892 out.go:252]   - Booting up control plane ...
	I0929 12:18:51.495766  409892 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 12:18:51.495874  409892 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 12:18:51.496024  409892 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 12:18:51.526431  409892 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 12:18:51.527703  409892 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 12:18:51.527786  409892 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 12:18:51.721493  409892 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0929 12:18:52.469953  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:18:52.470564  410531 main.go:141] libmachine: (embed-certs-046125) DBG | no network interface addresses found for domain embed-certs-046125 (source=lease)
	I0929 12:18:52.470616  410531 main.go:141] libmachine: (embed-certs-046125) DBG | trying to list again with source=arp
	I0929 12:18:52.470912  410531 main.go:141] libmachine: (embed-certs-046125) DBG | unable to find current IP address of domain embed-certs-046125 in network mk-embed-certs-046125 (interfaces detected: [])
	I0929 12:18:52.470994  410531 main.go:141] libmachine: (embed-certs-046125) DBG | I0929 12:18:52.470889  410560 retry.go:31] will retry after 827.419362ms: waiting for domain to come up
	I0929 12:18:53.300725  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:18:53.301489  410531 main.go:141] libmachine: (embed-certs-046125) DBG | no network interface addresses found for domain embed-certs-046125 (source=lease)
	I0929 12:18:53.301529  410531 main.go:141] libmachine: (embed-certs-046125) DBG | trying to list again with source=arp
	I0929 12:18:53.302048  410531 main.go:141] libmachine: (embed-certs-046125) DBG | unable to find current IP address of domain embed-certs-046125 in network mk-embed-certs-046125 (interfaces detected: [])
	I0929 12:18:53.302166  410531 main.go:141] libmachine: (embed-certs-046125) DBG | I0929 12:18:53.302051  410560 retry.go:31] will retry after 1.40382662s: waiting for domain to come up
	I0929 12:18:54.707441  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:18:54.708158  410531 main.go:141] libmachine: (embed-certs-046125) DBG | no network interface addresses found for domain embed-certs-046125 (source=lease)
	I0929 12:18:54.708181  410531 main.go:141] libmachine: (embed-certs-046125) DBG | trying to list again with source=arp
	I0929 12:18:54.708594  410531 main.go:141] libmachine: (embed-certs-046125) DBG | unable to find current IP address of domain embed-certs-046125 in network mk-embed-certs-046125 (interfaces detected: [])
	I0929 12:18:54.708620  410531 main.go:141] libmachine: (embed-certs-046125) DBG | I0929 12:18:54.708561  410560 retry.go:31] will retry after 1.262556228s: waiting for domain to come up
	I0929 12:18:55.974068  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:18:55.974818  410531 main.go:141] libmachine: (embed-certs-046125) DBG | no network interface addresses found for domain embed-certs-046125 (source=lease)
	I0929 12:18:55.974847  410531 main.go:141] libmachine: (embed-certs-046125) DBG | trying to list again with source=arp
	I0929 12:18:55.975306  410531 main.go:141] libmachine: (embed-certs-046125) DBG | unable to find current IP address of domain embed-certs-046125 in network mk-embed-certs-046125 (interfaces detected: [])
	I0929 12:18:55.975370  410531 main.go:141] libmachine: (embed-certs-046125) DBG | I0929 12:18:55.975308  410560 retry.go:31] will retry after 2.097471684s: waiting for domain to come up
	W0929 12:18:54.376361  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:18:56.377720  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	I0929 12:18:58.222171  409892 kubeadm.go:310] [apiclient] All control plane components are healthy after 6.504624 seconds
	I0929 12:18:58.222349  409892 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 12:18:58.247131  409892 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 12:18:58.777381  409892 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 12:18:58.777639  409892 kubeadm.go:310] [mark-control-plane] Marking the node old-k8s-version-832485 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 12:18:59.295123  409892 kubeadm.go:310] [bootstrap-token] Using token: wm7t63.4o4mancppbgz4t39
	I0929 12:18:59.296412  409892 out.go:252]   - Configuring RBAC rules ...
	I0929 12:18:59.296555  409892 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 12:18:59.304354  409892 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 12:18:59.311962  409892 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 12:18:59.317121  409892 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 12:18:59.327055  409892 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 12:18:59.336342  409892 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 12:18:59.354167  409892 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 12:18:59.665323  409892 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 12:18:59.812786  409892 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 12:18:59.813949  409892 kubeadm.go:310] 
	I0929 12:18:59.814080  409892 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 12:18:59.814095  409892 kubeadm.go:310] 
	I0929 12:18:59.814230  409892 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 12:18:59.814250  409892 kubeadm.go:310] 
	I0929 12:18:59.814283  409892 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 12:18:59.814385  409892 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 12:18:59.814479  409892 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 12:18:59.814491  409892 kubeadm.go:310] 
	I0929 12:18:59.814581  409892 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 12:18:59.814594  409892 kubeadm.go:310] 
	I0929 12:18:59.814672  409892 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 12:18:59.814682  409892 kubeadm.go:310] 
	I0929 12:18:59.814757  409892 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 12:18:59.814874  409892 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 12:18:59.815007  409892 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 12:18:59.815017  409892 kubeadm.go:310] 
	I0929 12:18:59.815140  409892 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 12:18:59.815253  409892 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 12:18:59.815265  409892 kubeadm.go:310] 
	I0929 12:18:59.815402  409892 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wm7t63.4o4mancppbgz4t39 \
	I0929 12:18:59.815536  409892 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6584cfb39d6d521de94c50ba68c73bacf142e1b11809c32d2bb4689966c9f242 \
	I0929 12:18:59.815560  409892 kubeadm.go:310] 	--control-plane 
	I0929 12:18:59.815566  409892 kubeadm.go:310] 
	I0929 12:18:59.815697  409892 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 12:18:59.815707  409892 kubeadm.go:310] 
	I0929 12:18:59.815845  409892 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wm7t63.4o4mancppbgz4t39 \
	I0929 12:18:59.816010  409892 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6584cfb39d6d521de94c50ba68c73bacf142e1b11809c32d2bb4689966c9f242 
	I0929 12:18:59.819142  409892 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 12:18:59.819190  409892 cni.go:84] Creating CNI manager for ""
	I0929 12:18:59.819204  409892 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 12:18:59.820967  409892 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0929 12:18:58.075116  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:18:58.075795  410531 main.go:141] libmachine: (embed-certs-046125) DBG | no network interface addresses found for domain embed-certs-046125 (source=lease)
	I0929 12:18:58.075819  410531 main.go:141] libmachine: (embed-certs-046125) DBG | trying to list again with source=arp
	I0929 12:18:58.076259  410531 main.go:141] libmachine: (embed-certs-046125) DBG | unable to find current IP address of domain embed-certs-046125 in network mk-embed-certs-046125 (interfaces detected: [])
	I0929 12:18:58.076286  410531 main.go:141] libmachine: (embed-certs-046125) DBG | I0929 12:18:58.076241  410560 retry.go:31] will retry after 2.619630805s: waiting for domain to come up
	I0929 12:19:00.699003  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:00.699595  410531 main.go:141] libmachine: (embed-certs-046125) DBG | no network interface addresses found for domain embed-certs-046125 (source=lease)
	I0929 12:19:00.699629  410531 main.go:141] libmachine: (embed-certs-046125) DBG | trying to list again with source=arp
	I0929 12:19:00.700104  410531 main.go:141] libmachine: (embed-certs-046125) DBG | unable to find current IP address of domain embed-certs-046125 in network mk-embed-certs-046125 (interfaces detected: [])
	I0929 12:19:00.700131  410531 main.go:141] libmachine: (embed-certs-046125) DBG | I0929 12:19:00.700084  410560 retry.go:31] will retry after 3.131805836s: waiting for domain to come up
	W0929 12:18:58.878931  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:19:01.377420  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	I0929 12:18:59.822277  409892 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0929 12:18:59.871235  409892 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
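
The log records only that a 496-byte conflist was copied to /etc/cni/net.d/1-k8s.conflist; its contents are not printed. The sketch below writes a generic bridge-plus-portmap CNI config of the kind such a file typically contains; the JSON fields and the 10.244.0.0/16 subnet (the pod CIDR used later in this log) are assumptions, not the file minikube actually ships:

	package main

	import (
		"fmt"
		"os"
	)

	// A generic bridge CNI conflist: a bridge plugin handing out host-local
	// addresses, plus portmap for hostPort support.
	const conflist = `{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}`

	func main() {
		// Writing under /etc/cni requires root, as the sudo mkdir above implies.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
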
	I0929 12:18:59.928771  409892 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 12:18:59.928838  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:18:59.928883  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-832485 minikube.k8s.io/updated_at=2025_09_29T12_18_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e087d081f23c6d1317bb12845422265d8d3490cf minikube.k8s.io/name=old-k8s-version-832485 minikube.k8s.io/primary=true
	I0929 12:19:00.119818  409892 ops.go:34] apiserver oom_adj: -16
	I0929 12:19:00.119847  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:00.620713  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:01.120426  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:01.620475  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:02.120189  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:02.620061  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:03.120138  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:03.620009  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:03.834636  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:03.835344  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has current primary IP address 192.168.83.157 and MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:03.835358  410531 main.go:141] libmachine: (embed-certs-046125) found domain IP: 192.168.83.157
	I0929 12:19:03.835366  410531 main.go:141] libmachine: (embed-certs-046125) reserving static IP address...
	I0929 12:19:03.835797  410531 main.go:141] libmachine: (embed-certs-046125) DBG | unable to find host DHCP lease matching {name: "embed-certs-046125", mac: "52:54:00:6c:2b:98", ip: "192.168.83.157"} in network mk-embed-certs-046125
	I0929 12:19:04.059434  410531 main.go:141] libmachine: (embed-certs-046125) DBG | Getting to WaitForSSH function...
	I0929 12:19:04.059469  410531 main.go:141] libmachine: (embed-certs-046125) reserved static IP address 192.168.83.157 for domain embed-certs-046125
	I0929 12:19:04.059483  410531 main.go:141] libmachine: (embed-certs-046125) waiting for SSH...
	I0929 12:19:04.062609  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:04.063153  410531 main.go:141] libmachine: (embed-certs-046125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:2b:98", ip: ""} in network mk-embed-certs-046125: {Iface:virbr3 ExpiryTime:2025-09-29 13:19:03 +0000 UTC Type:0 Mac:52:54:00:6c:2b:98 Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6c:2b:98}
	I0929 12:19:04.063193  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined IP address 192.168.83.157 and MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:04.063375  410531 main.go:141] libmachine: (embed-certs-046125) DBG | Using SSH client type: external
	I0929 12:19:04.063510  410531 main.go:141] libmachine: (embed-certs-046125) DBG | Using SSH private key: /home/jenkins/minikube-integration/21655-365455/.minikube/machines/embed-certs-046125/id_rsa (-rw-------)
	I0929 12:19:04.063560  410531 main.go:141] libmachine: (embed-certs-046125) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.157 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21655-365455/.minikube/machines/embed-certs-046125/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0929 12:19:04.063573  410531 main.go:141] libmachine: (embed-certs-046125) DBG | About to run SSH command:
	I0929 12:19:04.063585  410531 main.go:141] libmachine: (embed-certs-046125) DBG | exit 0
	I0929 12:19:04.199200  410531 main.go:141] libmachine: (embed-certs-046125) DBG | SSH cmd err, output: <nil>: 
	I0929 12:19:04.199545  410531 main.go:141] libmachine: (embed-certs-046125) domain creation complete
	I0929 12:19:04.200055  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetConfigRaw
	I0929 12:19:04.200747  410531 main.go:141] libmachine: (embed-certs-046125) Calling .DriverName
	I0929 12:19:04.201053  410531 main.go:141] libmachine: (embed-certs-046125) Calling .DriverName
	I0929 12:19:04.201312  410531 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0929 12:19:04.201332  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetState
	I0929 12:19:04.202966  410531 main.go:141] libmachine: Detecting operating system of created instance...
	I0929 12:19:04.203012  410531 main.go:141] libmachine: Waiting for SSH to be available...
	I0929 12:19:04.203021  410531 main.go:141] libmachine: Getting to WaitForSSH function...
	I0929 12:19:04.203029  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHHostname
	I0929 12:19:04.206292  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:04.206789  410531 main.go:141] libmachine: (embed-certs-046125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:2b:98", ip: ""} in network mk-embed-certs-046125: {Iface:virbr3 ExpiryTime:2025-09-29 13:19:03 +0000 UTC Type:0 Mac:52:54:00:6c:2b:98 Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:embed-certs-046125 Clientid:01:52:54:00:6c:2b:98}
	I0929 12:19:04.206824  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined IP address 192.168.83.157 and MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:04.207090  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHPort
	I0929 12:19:04.207367  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHKeyPath
	I0929 12:19:04.207605  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHKeyPath
	I0929 12:19:04.207791  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHUsername
	I0929 12:19:04.208024  410531 main.go:141] libmachine: Using SSH client type: native
	I0929 12:19:04.208363  410531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.83.157 22 <nil> <nil>}
	I0929 12:19:04.208381  410531 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0929 12:19:04.323622  410531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
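
WaitForSSH above amounts to running `exit 0` over SSH until it succeeds. A compact sketch of that probe using golang.org/x/crypto/ssh, reusing the user, host, and key path shown in the log; the one-second retry cadence is an assumption:

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// sshAlive dials the host and runs the same `exit 0` probe the log shows.
	func sshAlive(addr string, cfg *ssh.ClientConfig) bool {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return false
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return false
		}
		defer sess.Close()
		return sess.Run("exit 0") == nil
	}

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21655-365455/.minikube/machines/embed-certs-046125/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
			Timeout:         10 * time.Second,
		}
		for !sshAlive("192.168.83.157:22", cfg) {
			time.Sleep(time.Second)
		}
		fmt.Println("SSH is available")
	}
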
	I0929 12:19:04.323650  410531 main.go:141] libmachine: Detecting the provisioner...
	I0929 12:19:04.323661  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHHostname
	I0929 12:19:04.327298  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:04.327783  410531 main.go:141] libmachine: (embed-certs-046125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:2b:98", ip: ""} in network mk-embed-certs-046125: {Iface:virbr3 ExpiryTime:2025-09-29 13:19:03 +0000 UTC Type:0 Mac:52:54:00:6c:2b:98 Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:embed-certs-046125 Clientid:01:52:54:00:6c:2b:98}
	I0929 12:19:04.327818  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined IP address 192.168.83.157 and MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:04.328091  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHPort
	I0929 12:19:04.328297  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHKeyPath
	I0929 12:19:04.328496  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHKeyPath
	I0929 12:19:04.328636  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHUsername
	I0929 12:19:04.328848  410531 main.go:141] libmachine: Using SSH client type: native
	I0929 12:19:04.329087  410531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.83.157 22 <nil> <nil>}
	I0929 12:19:04.329101  410531 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0929 12:19:04.447821  410531 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0929 12:19:04.447939  410531 main.go:141] libmachine: found compatible host: buildroot
	I0929 12:19:04.447959  410531 main.go:141] libmachine: Provisioning with buildroot...
	I0929 12:19:04.447999  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetMachineName
	I0929 12:19:04.448316  410531 buildroot.go:166] provisioning hostname "embed-certs-046125"
	I0929 12:19:04.448352  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetMachineName
	I0929 12:19:04.448559  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHHostname
	I0929 12:19:04.451760  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:04.452142  410531 main.go:141] libmachine: (embed-certs-046125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:2b:98", ip: ""} in network mk-embed-certs-046125: {Iface:virbr3 ExpiryTime:2025-09-29 13:19:03 +0000 UTC Type:0 Mac:52:54:00:6c:2b:98 Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:embed-certs-046125 Clientid:01:52:54:00:6c:2b:98}
	I0929 12:19:04.452169  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined IP address 192.168.83.157 and MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:04.452322  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHPort
	I0929 12:19:04.452537  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHKeyPath
	I0929 12:19:04.452698  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHKeyPath
	I0929 12:19:04.452829  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHUsername
	I0929 12:19:04.453085  410531 main.go:141] libmachine: Using SSH client type: native
	I0929 12:19:04.453282  410531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.83.157 22 <nil> <nil>}
	I0929 12:19:04.453294  410531 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-046125 && echo "embed-certs-046125" | sudo tee /etc/hostname
	I0929 12:19:04.593117  410531 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-046125
	
	I0929 12:19:04.593159  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHHostname
	I0929 12:19:04.596442  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:04.596958  410531 main.go:141] libmachine: (embed-certs-046125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:2b:98", ip: ""} in network mk-embed-certs-046125: {Iface:virbr3 ExpiryTime:2025-09-29 13:19:03 +0000 UTC Type:0 Mac:52:54:00:6c:2b:98 Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:embed-certs-046125 Clientid:01:52:54:00:6c:2b:98}
	I0929 12:19:04.597005  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined IP address 192.168.83.157 and MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:04.597270  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHPort
	I0929 12:19:04.597466  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHKeyPath
	I0929 12:19:04.597663  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHKeyPath
	I0929 12:19:04.597817  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHUsername
	I0929 12:19:04.598009  410531 main.go:141] libmachine: Using SSH client type: native
	I0929 12:19:04.598209  410531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.83.157 22 <nil> <nil>}
	I0929 12:19:04.598231  410531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-046125' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-046125/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-046125' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 12:19:04.739491  410531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 12:19:04.739524  410531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21655-365455/.minikube CaCertPath:/home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21655-365455/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21655-365455/.minikube}
	I0929 12:19:04.739579  410531 buildroot.go:174] setting up certificates
	I0929 12:19:04.739600  410531 provision.go:84] configureAuth start
	I0929 12:19:04.739624  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetMachineName
	I0929 12:19:04.739947  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetIP
	I0929 12:19:04.743263  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:04.743659  410531 main.go:141] libmachine: (embed-certs-046125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:2b:98", ip: ""} in network mk-embed-certs-046125: {Iface:virbr3 ExpiryTime:2025-09-29 13:19:03 +0000 UTC Type:0 Mac:52:54:00:6c:2b:98 Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:embed-certs-046125 Clientid:01:52:54:00:6c:2b:98}
	I0929 12:19:04.743691  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined IP address 192.168.83.157 and MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:04.743853  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHHostname
	I0929 12:19:04.746704  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:04.747137  410531 main.go:141] libmachine: (embed-certs-046125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:2b:98", ip: ""} in network mk-embed-certs-046125: {Iface:virbr3 ExpiryTime:2025-09-29 13:19:03 +0000 UTC Type:0 Mac:52:54:00:6c:2b:98 Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:embed-certs-046125 Clientid:01:52:54:00:6c:2b:98}
	I0929 12:19:04.747167  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined IP address 192.168.83.157 and MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:04.747376  410531 provision.go:143] copyHostCerts
	I0929 12:19:04.747443  410531 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-365455/.minikube/ca.pem, removing ...
	I0929 12:19:04.747463  410531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-365455/.minikube/ca.pem
	I0929 12:19:04.747543  410531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21655-365455/.minikube/ca.pem (1078 bytes)
	I0929 12:19:04.747654  410531 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-365455/.minikube/cert.pem, removing ...
	I0929 12:19:04.747663  410531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-365455/.minikube/cert.pem
	I0929 12:19:04.747692  410531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21655-365455/.minikube/cert.pem (1123 bytes)
	I0929 12:19:04.747780  410531 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-365455/.minikube/key.pem, removing ...
	I0929 12:19:04.747787  410531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-365455/.minikube/key.pem
	I0929 12:19:04.747811  410531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21655-365455/.minikube/key.pem (1675 bytes)
	I0929 12:19:04.747861  410531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21655-365455/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca-key.pem org=jenkins.embed-certs-046125 san=[127.0.0.1 192.168.83.157 embed-certs-046125 localhost minikube]
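
provision.go generates a server certificate whose SANs cover the loopback address, the machine IP, its hostname, localhost, and minikube. A compact crypto/x509 sketch of building that SAN list; it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem named in the log:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-046125"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The SAN list from the log: san=[127.0.0.1 192.168.83.157 embed-certs-046125 localhost minikube]
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.83.157")},
			DNSNames:    []string{"embed-certs-046125", "localhost", "minikube"},
		}
		// Self-signed here (template == parent); minikube instead signs with its CA key.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
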
	I0929 12:19:04.909516  410531 provision.go:177] copyRemoteCerts
	I0929 12:19:04.909598  410531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 12:19:04.909625  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHHostname
	I0929 12:19:04.913184  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:04.913577  410531 main.go:141] libmachine: (embed-certs-046125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:2b:98", ip: ""} in network mk-embed-certs-046125: {Iface:virbr3 ExpiryTime:2025-09-29 13:19:03 +0000 UTC Type:0 Mac:52:54:00:6c:2b:98 Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:embed-certs-046125 Clientid:01:52:54:00:6c:2b:98}
	I0929 12:19:04.913605  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined IP address 192.168.83.157 and MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:04.913840  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHPort
	I0929 12:19:04.914033  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHKeyPath
	I0929 12:19:04.914235  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHUsername
	I0929 12:19:04.914437  410531 sshutil.go:53] new ssh client: &{IP:192.168.83.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/embed-certs-046125/id_rsa Username:docker}
	I0929 12:19:05.003470  410531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 12:19:05.033291  410531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0929 12:19:05.063728  410531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 12:19:05.094039  410531 provision.go:87] duration metric: took 354.410966ms to configureAuth
	I0929 12:19:05.094068  410531 buildroot.go:189] setting minikube options for container-runtime
	I0929 12:19:05.094283  410531 config.go:182] Loaded profile config "embed-certs-046125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:19:05.094407  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHHostname
	I0929 12:19:05.097666  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:05.098065  410531 main.go:141] libmachine: (embed-certs-046125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:2b:98", ip: ""} in network mk-embed-certs-046125: {Iface:virbr3 ExpiryTime:2025-09-29 13:19:03 +0000 UTC Type:0 Mac:52:54:00:6c:2b:98 Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:embed-certs-046125 Clientid:01:52:54:00:6c:2b:98}
	I0929 12:19:05.098097  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined IP address 192.168.83.157 and MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:05.098328  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHPort
	I0929 12:19:05.098545  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHKeyPath
	I0929 12:19:05.098766  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHKeyPath
	I0929 12:19:05.098961  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHUsername
	I0929 12:19:05.099230  410531 main.go:141] libmachine: Using SSH client type: native
	I0929 12:19:05.099431  410531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.83.157 22 <nil> <nil>}
	I0929 12:19:05.099447  410531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 12:19:05.350596  410531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 12:19:05.350619  410531 main.go:141] libmachine: Checking connection to Docker...
	I0929 12:19:05.350629  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetURL
	I0929 12:19:05.352059  410531 main.go:141] libmachine: (embed-certs-046125) DBG | using libvirt version 8000000
	I0929 12:19:05.354632  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:05.355132  410531 main.go:141] libmachine: (embed-certs-046125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:2b:98", ip: ""} in network mk-embed-certs-046125: {Iface:virbr3 ExpiryTime:2025-09-29 13:19:03 +0000 UTC Type:0 Mac:52:54:00:6c:2b:98 Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:embed-certs-046125 Clientid:01:52:54:00:6c:2b:98}
	I0929 12:19:05.355169  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined IP address 192.168.83.157 and MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:05.355330  410531 main.go:141] libmachine: Docker is up and running!
	I0929 12:19:05.355348  410531 main.go:141] libmachine: Reticulating splines...
	I0929 12:19:05.355355  410531 client.go:171] duration metric: took 18.210418885s to LocalClient.Create
	I0929 12:19:05.355400  410531 start.go:167] duration metric: took 18.210513608s to libmachine.API.Create "embed-certs-046125"
	I0929 12:19:05.355416  410531 start.go:293] postStartSetup for "embed-certs-046125" (driver="kvm2")
	I0929 12:19:05.355429  410531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 12:19:05.355454  410531 main.go:141] libmachine: (embed-certs-046125) Calling .DriverName
	I0929 12:19:05.355748  410531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 12:19:05.355776  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHHostname
	I0929 12:19:05.358280  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:05.358674  410531 main.go:141] libmachine: (embed-certs-046125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:2b:98", ip: ""} in network mk-embed-certs-046125: {Iface:virbr3 ExpiryTime:2025-09-29 13:19:03 +0000 UTC Type:0 Mac:52:54:00:6c:2b:98 Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:embed-certs-046125 Clientid:01:52:54:00:6c:2b:98}
	I0929 12:19:05.358709  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined IP address 192.168.83.157 and MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:05.358856  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHPort
	I0929 12:19:05.359033  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHKeyPath
	I0929 12:19:05.359202  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHUsername
	I0929 12:19:05.359337  410531 sshutil.go:53] new ssh client: &{IP:192.168.83.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/embed-certs-046125/id_rsa Username:docker}
	I0929 12:19:05.450959  410531 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 12:19:05.457034  410531 info.go:137] Remote host: Buildroot 2025.02
	I0929 12:19:05.457063  410531 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-365455/.minikube/addons for local assets ...
	I0929 12:19:05.457122  410531 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-365455/.minikube/files for local assets ...
	I0929 12:19:05.457198  410531 filesync.go:149] local asset: /home/jenkins/minikube-integration/21655-365455/.minikube/files/etc/ssl/certs/3694232.pem -> 3694232.pem in /etc/ssl/certs
	I0929 12:19:05.457294  410531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 12:19:05.469676  410531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/files/etc/ssl/certs/3694232.pem --> /etc/ssl/certs/3694232.pem (1708 bytes)
	I0929 12:19:05.502646  410531 start.go:296] duration metric: took 147.209145ms for postStartSetup
	I0929 12:19:05.502731  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetConfigRaw
	I0929 12:19:05.503446  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetIP
	I0929 12:19:05.507170  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:05.507727  410531 main.go:141] libmachine: (embed-certs-046125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:2b:98", ip: ""} in network mk-embed-certs-046125: {Iface:virbr3 ExpiryTime:2025-09-29 13:19:03 +0000 UTC Type:0 Mac:52:54:00:6c:2b:98 Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:embed-certs-046125 Clientid:01:52:54:00:6c:2b:98}
	I0929 12:19:05.507756  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined IP address 192.168.83.157 and MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:05.508133  410531 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/config.json ...
	I0929 12:19:05.508346  410531 start.go:128] duration metric: took 18.380900879s to createHost
	I0929 12:19:05.508374  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHHostname
	I0929 12:19:05.511030  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:05.511399  410531 main.go:141] libmachine: (embed-certs-046125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:2b:98", ip: ""} in network mk-embed-certs-046125: {Iface:virbr3 ExpiryTime:2025-09-29 13:19:03 +0000 UTC Type:0 Mac:52:54:00:6c:2b:98 Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:embed-certs-046125 Clientid:01:52:54:00:6c:2b:98}
	I0929 12:19:05.511425  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined IP address 192.168.83.157 and MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:05.511578  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHPort
	I0929 12:19:05.511806  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHKeyPath
	I0929 12:19:05.512007  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHKeyPath
	I0929 12:19:05.512148  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHUsername
	I0929 12:19:05.512349  410531 main.go:141] libmachine: Using SSH client type: native
	I0929 12:19:05.512588  410531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.83.157 22 <nil> <nil>}
	I0929 12:19:05.512602  410531 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0929 12:19:05.629999  410531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759148345.588127664
	
	I0929 12:19:05.630025  410531 fix.go:216] guest clock: 1759148345.588127664
	I0929 12:19:05.630036  410531 fix.go:229] Guest: 2025-09-29 12:19:05.588127664 +0000 UTC Remote: 2025-09-29 12:19:05.508360451 +0000 UTC m=+18.513187547 (delta=79.767213ms)
	I0929 12:19:05.630064  410531 fix.go:200] guest clock delta is within tolerance: 79.767213ms
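
fix.go parses the guest's `date +%s.%N` output and compares it with the host-side timestamp to decide whether a clock resync is needed. A sketch of that check using the two values from the log (it reproduces the 79.767213ms delta above); the one-second tolerance is an assumption:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns `date +%s.%N` output into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		secStr, nsecStr, found := strings.Cut(strings.TrimSpace(out), ".")
		if !found {
			nsecStr = "0"
		}
		sec, err := strconv.ParseInt(secStr, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		nsec, err := strconv.ParseInt(nsecStr, 10, 64) // %N always prints nine digits
		if err != nil {
			return time.Time{}, err
		}
		return time.Unix(sec, nsec).UTC(), nil
	}

	func main() {
		guest, err := parseGuestClock("1759148345.588127664") // guest value from the log
		if err != nil {
			panic(err)
		}
		host := time.Unix(1759148345, 508360451).UTC() // host-side timestamp from the log
		delta := guest.Sub(host)
		const tolerance = time.Second // assumed threshold
		if delta.Abs() < tolerance {
			fmt.Printf("guest clock delta is within tolerance: %s\n", delta) // prints 79.767213ms
		} else {
			fmt.Printf("would resync guest clock, delta: %s\n", delta)
		}
	}
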
	I0929 12:19:05.630071  410531 start.go:83] releasing machines lock for "embed-certs-046125", held for 18.502729935s
	I0929 12:19:05.630095  410531 main.go:141] libmachine: (embed-certs-046125) Calling .DriverName
	I0929 12:19:05.630428  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetIP
	I0929 12:19:05.633718  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:05.634140  410531 main.go:141] libmachine: (embed-certs-046125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:2b:98", ip: ""} in network mk-embed-certs-046125: {Iface:virbr3 ExpiryTime:2025-09-29 13:19:03 +0000 UTC Type:0 Mac:52:54:00:6c:2b:98 Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:embed-certs-046125 Clientid:01:52:54:00:6c:2b:98}
	I0929 12:19:05.634182  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined IP address 192.168.83.157 and MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:05.634363  410531 main.go:141] libmachine: (embed-certs-046125) Calling .DriverName
	I0929 12:19:05.634904  410531 main.go:141] libmachine: (embed-certs-046125) Calling .DriverName
	I0929 12:19:05.635125  410531 main.go:141] libmachine: (embed-certs-046125) Calling .DriverName
	I0929 12:19:05.635238  410531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 12:19:05.635290  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHHostname
	I0929 12:19:05.635345  410531 ssh_runner.go:195] Run: cat /version.json
	I0929 12:19:05.635373  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHHostname
	I0929 12:19:05.638859  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:05.638968  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:05.639420  410531 main.go:141] libmachine: (embed-certs-046125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:2b:98", ip: ""} in network mk-embed-certs-046125: {Iface:virbr3 ExpiryTime:2025-09-29 13:19:03 +0000 UTC Type:0 Mac:52:54:00:6c:2b:98 Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:embed-certs-046125 Clientid:01:52:54:00:6c:2b:98}
	I0929 12:19:05.639450  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined IP address 192.168.83.157 and MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:05.639477  410531 main.go:141] libmachine: (embed-certs-046125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:2b:98", ip: ""} in network mk-embed-certs-046125: {Iface:virbr3 ExpiryTime:2025-09-29 13:19:03 +0000 UTC Type:0 Mac:52:54:00:6c:2b:98 Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:embed-certs-046125 Clientid:01:52:54:00:6c:2b:98}
	I0929 12:19:05.639498  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined IP address 192.168.83.157 and MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:05.639705  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHPort
	I0929 12:19:05.639919  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHKeyPath
	I0929 12:19:05.639946  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHPort
	I0929 12:19:05.640135  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHKeyPath
	I0929 12:19:05.640143  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHUsername
	I0929 12:19:05.640321  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHUsername
	I0929 12:19:05.640343  410531 sshutil.go:53] new ssh client: &{IP:192.168.83.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/embed-certs-046125/id_rsa Username:docker}
	I0929 12:19:05.640471  410531 sshutil.go:53] new ssh client: &{IP:192.168.83.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/embed-certs-046125/id_rsa Username:docker}
	I0929 12:19:05.754902  410531 ssh_runner.go:195] Run: systemctl --version
	I0929 12:19:05.762730  410531 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 12:19:05.925056  410531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0929 12:19:05.931850  410531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0929 12:19:05.931923  410531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 12:19:05.951108  410531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0929 12:19:05.951135  410531 start.go:495] detecting cgroup driver to use...
	I0929 12:19:05.951215  410531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 12:19:05.970437  410531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 12:19:05.987143  410531 docker.go:218] disabling cri-docker service (if available) ...
	I0929 12:19:05.987215  410531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 12:19:06.005146  410531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 12:19:06.023892  410531 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 12:19:06.182330  410531 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 12:19:06.414633  410531 docker.go:234] disabling docker service ...
	I0929 12:19:06.414747  410531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 12:19:06.433260  410531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 12:19:06.449292  410531 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 12:19:06.607419  410531 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 12:19:06.778087  410531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 12:19:06.799065  410531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 12:19:06.823927  410531 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 12:19:06.824031  410531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:19:06.836525  410531 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0929 12:19:06.836607  410531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:19:06.849100  410531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:19:06.861213  410531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:19:06.874030  410531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 12:19:06.887586  410531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:19:06.901956  410531 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:19:06.922788  410531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 12:19:06.937232  410531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 12:19:06.948799  410531 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0929 12:19:06.948856  410531 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0929 12:19:06.970720  410531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
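
The sequence above treats a failed `sysctl net.bridge.bridge-nf-call-iptables` as a hint that br_netfilter is not loaded yet (the key only exists once it is), loads the module, then enables IP forwarding. A sketch of that fallback using os/exec; the helper name is illustrative:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func run(name string, args ...string) error {
		cmd := exec.Command(name, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		// A failure here "might be okay", as the log puts it: the sysctl key
		// appears only after br_netfilter is loaded.
		if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
				fmt.Fprintln(os.Stderr, "br_netfilter unavailable:", err)
			}
		}
		if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
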
	I0929 12:19:06.983788  410531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:19:07.142757  410531 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 12:19:07.267694  410531 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 12:19:07.267761  410531 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 12:19:07.273203  410531 start.go:563] Will wait 60s for crictl version
	I0929 12:19:07.273282  410531 ssh_runner.go:195] Run: which crictl
	I0929 12:19:07.277577  410531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 12:19:07.322992  410531 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0929 12:19:07.323087  410531 ssh_runner.go:195] Run: crio --version
	I0929 12:19:07.357815  410531 ssh_runner.go:195] Run: crio --version
	I0929 12:19:07.396449  410531 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	W0929 12:19:03.876475  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:19:06.374715  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:19:08.377050  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	I0929 12:19:04.120129  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:04.620217  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:05.120777  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:05.620431  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:06.120060  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:06.620033  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:07.120251  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:07.620033  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:08.120501  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:08.620138  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:07.397468  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetIP
	I0929 12:19:07.400943  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:07.401372  410531 main.go:141] libmachine: (embed-certs-046125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:2b:98", ip: ""} in network mk-embed-certs-046125: {Iface:virbr3 ExpiryTime:2025-09-29 13:19:03 +0000 UTC Type:0 Mac:52:54:00:6c:2b:98 Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:embed-certs-046125 Clientid:01:52:54:00:6c:2b:98}
	I0929 12:19:07.401400  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined IP address 192.168.83.157 and MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:07.401737  410531 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0929 12:19:07.406737  410531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 12:19:07.422245  410531 kubeadm.go:875] updating cluster {Name:embed-certs-046125 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-046125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.157 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0929 12:19:07.422371  410531 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 12:19:07.422439  410531 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 12:19:07.458111  410531 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0929 12:19:07.458224  410531 ssh_runner.go:195] Run: which lz4
	I0929 12:19:07.463185  410531 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0929 12:19:07.467985  410531 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0929 12:19:07.468041  410531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0929 12:19:08.990567  410531 crio.go:462] duration metric: took 1.527428439s to copy over tarball
	I0929 12:19:08.990701  410531 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0929 12:19:10.730630  410531 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.73988608s)
	I0929 12:19:10.730687  410531 crio.go:469] duration metric: took 1.74007922s to extract the tarball
	I0929 12:19:10.730698  410531 ssh_runner.go:146] rm: /preloaded.tar.lz4
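
The "duration metric: took ..." lines come from timing each step against a start timestamp and logging the elapsed time alongside its purpose. A minimal sketch of that pattern; the sleep is a placeholder for the copy-and-extract work:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		start := time.Now()
		// ... copy over and extract /preloaded.tar.lz4 here ...
		time.Sleep(50 * time.Millisecond) // stand-in for the real work
		fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
	}
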
	I0929 12:19:10.777033  410531 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 12:19:10.821703  410531 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 12:19:10.821733  410531 cache_images.go:85] Images are preloaded, skipping loading
	I0929 12:19:10.821745  410531 kubeadm.go:926] updating node { 192.168.83.157 8443 v1.34.0 crio true true} ...
	I0929 12:19:10.821882  410531 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-046125 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.157
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:embed-certs-046125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 12:19:10.821997  410531 ssh_runner.go:195] Run: crio config
	I0929 12:19:10.871249  410531 cni.go:84] Creating CNI manager for ""
	I0929 12:19:10.871285  410531 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 12:19:10.871303  410531 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 12:19:10.871337  410531 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.157 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-046125 NodeName:embed-certs-046125 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.157"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.157 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 12:19:10.871569  410531 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.157
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-046125"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.157"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.157"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
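A few lines below, this rendered config is shipped to the node as /var/tmp/minikube/kubeadm.yaml.new (2221 bytes). As a rough illustration of how a document like this is produced, here is a minimal Go sketch using text/template; the struct and the InitConfiguration-only template are illustrative stand-ins, not minikube's actual bootstrapper code:

package main

import (
	"os"
	"text/template"
)

// Illustrative parameter struct; minikube's real templating lives in its
// bootstrapper packages and covers all four documents shown above.
type kubeadmParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initTmpl))
	p := kubeadmParams{AdvertiseAddress: "192.168.83.157", BindPort: 8443, NodeName: "embed-certs-046125"}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}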
	I0929 12:19:10.871651  410531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 12:19:10.887173  410531 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 12:19:10.887252  410531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 12:19:10.899339  410531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0929 12:19:10.922507  410531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 12:19:10.945610  410531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0929 12:19:10.969410  410531 ssh_runner.go:195] Run: grep 192.168.83.157	control-plane.minikube.internal$ /etc/hosts
	I0929 12:19:10.973913  410531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.157	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
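The bash one-liner above makes the /etc/hosts update idempotent: drop any existing control-plane.minikube.internal entry, append a fresh one, then copy the result back with sudo. The same filter-and-append, sketched in Go (file handling simplified; the real step runs remotely over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// updateHosts mirrors the grep -v / echo pipeline: remove lines ending in
// "<tab>host", then append "ip<tab>host".
func updateHosts(contents, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(contents, "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry; re-added below
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, host)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(updateHosts(strings.TrimRight(string(data), "\n"),
		"192.168.83.157", "control-plane.minikube.internal"))
}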
	I0929 12:19:10.988651  410531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:19:11.145928  410531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:19:11.187005  410531 certs.go:68] Setting up /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125 for IP: 192.168.83.157
	I0929 12:19:11.187033  410531 certs.go:194] generating shared ca certs ...
	I0929 12:19:11.187057  410531 certs.go:226] acquiring lock for ca certs: {Name:mk0b410c7c5424a4463d6cf6464227ce4eef65e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:19:11.187250  410531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21655-365455/.minikube/ca.key
	I0929 12:19:11.187324  410531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21655-365455/.minikube/proxy-client-ca.key
	I0929 12:19:11.187340  410531 certs.go:256] generating profile certs ...
	I0929 12:19:11.187416  410531 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/client.key
	I0929 12:19:11.187438  410531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/client.crt with IP's: []
	I0929 12:19:11.416346  410531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/client.crt ...
	I0929 12:19:11.416378  410531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/client.crt: {Name:mk7c4737cea964f5131a6a31898ffc0e0f34d8cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:19:11.416570  410531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/client.key ...
	I0929 12:19:11.416582  410531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/client.key: {Name:mk5a826804434a8c9edcc258f3cdeba6e2ea9980 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:19:11.416668  410531 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/apiserver.key.77d384dc
	I0929 12:19:11.416682  410531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/apiserver.crt.77d384dc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.157]
	I0929 12:19:11.673913  410531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/apiserver.crt.77d384dc ...
	I0929 12:19:11.673945  410531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/apiserver.crt.77d384dc: {Name:mk4f284403da1b93a6f148d1170e586a9900354e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:19:11.674149  410531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/apiserver.key.77d384dc ...
	I0929 12:19:11.674168  410531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/apiserver.key.77d384dc: {Name:mkf93861ae96e2465fc83caa1f10105dd4d0c3db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:19:11.674281  410531 certs.go:381] copying /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/apiserver.crt.77d384dc -> /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/apiserver.crt
	I0929 12:19:11.674361  410531 certs.go:385] copying /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/apiserver.key.77d384dc -> /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/apiserver.key
	I0929 12:19:11.674415  410531 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/proxy-client.key
	I0929 12:19:11.674430  410531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/proxy-client.crt with IP's: []
	I0929 12:19:11.903574  410531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/proxy-client.crt ...
	I0929 12:19:11.903607  410531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/proxy-client.crt: {Name:mkb17490b270d6f751452e0afe88e46822f1c03b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:19:11.903779  410531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/proxy-client.key ...
	I0929 12:19:11.903794  410531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/proxy-client.key: {Name:mk11d7018dedf057c66e1f7e2ddafcfeb5fc68c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
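Each "generating signed profile cert ... with IP's: [...]" pair above issues a leaf certificate against the shared minikubeCA. In outline, that amounts to the following crypto/x509 sketch: a self-signed CA stands in for the reused one, most error handling is elided, and the PEM encoding and file locking done by minikube's crypto.go are omitted. NotAfter matches the profile's CertExpiration of 26280h0m0s, i.e. three years.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Stand-in for the shared minikubeCA that the log reuses.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert with the IP SANs the log lists for apiserver.crt.77d384dc.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.83.157"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued %d-byte DER cert signed by %s\n", len(leafDER), caCert.Subject.CommonName)
}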
	I0929 12:19:11.904059  410531 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/369423.pem (1338 bytes)
	W0929 12:19:11.904128  410531 certs.go:480] ignoring /home/jenkins/minikube-integration/21655-365455/.minikube/certs/369423_empty.pem, impossibly tiny 0 bytes
	I0929 12:19:11.904143  410531 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 12:19:11.904166  410531 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/ca.pem (1078 bytes)
	I0929 12:19:11.904190  410531 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/cert.pem (1123 bytes)
	I0929 12:19:11.904216  410531 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/certs/key.pem (1675 bytes)
	I0929 12:19:11.904279  410531 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-365455/.minikube/files/etc/ssl/certs/3694232.pem (1708 bytes)
	I0929 12:19:11.905096  410531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 12:19:11.952578  410531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 12:19:11.996472  410531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 12:19:12.029085  410531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 12:19:12.062426  410531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0929 12:19:12.091310  410531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 12:19:12.122406  410531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 12:19:12.154474  410531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/embed-certs-046125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 12:19:12.192246  410531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 12:19:12.229026  410531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/certs/369423.pem --> /usr/share/ca-certificates/369423.pem (1338 bytes)
	I0929 12:19:12.260095  410531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-365455/.minikube/files/etc/ssl/certs/3694232.pem --> /usr/share/ca-certificates/3694232.pem (1708 bytes)
	I0929 12:19:12.290787  410531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 12:19:12.312020  410531 ssh_runner.go:195] Run: openssl version
	I0929 12:19:12.318739  410531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3694232.pem && ln -fs /usr/share/ca-certificates/3694232.pem /etc/ssl/certs/3694232.pem"
	I0929 12:19:12.333008  410531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3694232.pem
	I0929 12:19:12.338122  410531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 11:26 /usr/share/ca-certificates/3694232.pem
	I0929 12:19:12.338192  410531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3694232.pem
	I0929 12:19:12.345942  410531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3694232.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 12:19:12.359472  410531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 12:19:12.374992  410531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:19:12.380666  410531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 11:16 /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:19:12.380718  410531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:19:12.387901  410531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 12:19:12.402446  410531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/369423.pem && ln -fs /usr/share/ca-certificates/369423.pem /etc/ssl/certs/369423.pem"
	I0929 12:19:12.418019  410531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/369423.pem
	I0929 12:19:12.423193  410531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 11:26 /usr/share/ca-certificates/369423.pem
	I0929 12:19:12.423266  410531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/369423.pem
	I0929 12:19:12.431531  410531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/369423.pem /etc/ssl/certs/51391683.0"
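The three test/ln/hash sequences above install each PEM under /usr/share/ca-certificates and then symlink it into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 for minikubeCA.pem, and so on). A small Go sketch of one round, reusing the very openssl invocation the runner executes; it would need the same root privileges the log obtains via sudo:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash asks openssl for the cert's subject hash, then links
// /etc/ssl/certs/<hash>.0 at the installed PEM, guarded like "test -L ... ||".
func linkByHash(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	if _, err := os.Lstat(link); err == nil {
		return nil // symlink already present
	}
	return os.Symlink(pem, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}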
	I0929 12:19:12.444707  410531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 12:19:12.449796  410531 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 12:19:12.449868  410531 kubeadm.go:392] StartCluster: {Name:embed-certs-046125 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-046125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.157 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:19:12.449968  410531 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 12:19:12.450058  410531 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 12:19:12.490349  410531 cri.go:89] found id: ""
	I0929 12:19:12.490464  410531 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 12:19:12.503645  410531 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 12:19:12.516355  410531 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 12:19:12.528600  410531 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 12:19:12.528623  410531 kubeadm.go:157] found existing configuration files:
	
	I0929 12:19:12.528673  410531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 12:19:12.541390  410531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 12:19:12.541469  410531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 12:19:12.554526  410531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 12:19:12.567795  410531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 12:19:12.567874  410531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 12:19:12.581770  410531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 12:19:12.593068  410531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 12:19:12.593140  410531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 12:19:12.608656  410531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 12:19:12.621756  410531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 12:19:12.621836  410531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
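The sweep from 12:19:12.528 onward checks each of the four kubeconfigs under /etc/kubernetes for the expected control-plane endpoint and removes any file that fails the grep (here they simply do not exist yet, hence the status-2 exits). The loop, sketched in Go with the same grep-based check:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + name
		// grep exits non-zero when the endpoint is absent or the file is missing.
		if err := exec.Command("grep", endpoint, path).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
			os.Remove(path) // mirrors "sudo rm -f": ignore the error
		}
	}
}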
	I0929 12:19:12.641888  410531 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0929 12:19:12.717516  410531 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 12:19:12.717607  410531 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 12:19:12.831585  410531 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 12:19:12.831729  410531 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 12:19:12.831838  410531 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 12:19:12.844716  410531 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W0929 12:19:10.377238  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:19:12.670407  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	I0929 12:19:09.120888  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:09.619990  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:10.120737  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:10.620502  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:11.120943  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:11.620860  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:12.120828  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:12.620138  409892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:13.801403  409892 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.181219164s)
	I0929 12:19:13.801452  409892 kubeadm.go:1105] duration metric: took 13.872688685s to wait for elevateKubeSystemPrivileges
	I0929 12:19:13.801477  409892 kubeadm.go:394] duration metric: took 25.539664453s to StartCluster
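The half-second cadence of the "kubectl get sa default" runs above (12:19:09 through 12:19:13) is a retry loop: elevateKubeSystemPrivileges can proceed only once the default ServiceAccount exists. The shape of that loop, sketched in Go with the binary and kubeconfig paths taken from the log; the two-minute bound is illustrative, not minikube's:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.28.0/kubectl"
	deadline := time.Now().Add(2 * time.Minute) // illustrative bound
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if cmd.Run() == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("timed out waiting for the default service account")
}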
	I0929 12:19:13.801502  409892 settings.go:142] acquiring lock: {Name:mk1143e9344364f35458338f5354c9162487b91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:19:13.801606  409892 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21655-365455/kubeconfig
	I0929 12:19:13.803441  409892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/kubeconfig: {Name:mkd302531ec3362506563544f43831c9980ac365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:19:13.908922  409892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 12:19:13.909017  409892 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.163 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 12:19:13.909112  409892 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 12:19:13.909237  409892 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-832485"
	I0929 12:19:13.909264  409892 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-832485"
	I0929 12:19:13.909267  409892 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-832485"
	I0929 12:19:13.909305  409892 host.go:66] Checking if "old-k8s-version-832485" exists ...
	I0929 12:19:13.909309  409892 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-832485"
	I0929 12:19:13.909278  409892 config.go:182] Loaded profile config "old-k8s-version-832485": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0929 12:19:13.909828  409892 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 12:19:13.909861  409892 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 12:19:13.909868  409892 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:19:13.909901  409892 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:19:13.924706  409892 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36619
	I0929 12:19:13.924706  409892 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33257
	I0929 12:19:13.925317  409892 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:19:13.925370  409892 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:19:13.925896  409892 main.go:141] libmachine: Using API Version  1
	I0929 12:19:13.925930  409892 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:19:13.926011  409892 main.go:141] libmachine: Using API Version  1
	I0929 12:19:13.926031  409892 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:19:13.926344  409892 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:19:13.926389  409892 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:19:13.926564  409892 main.go:141] libmachine: (old-k8s-version-832485) Calling .GetState
	I0929 12:19:13.926913  409892 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 12:19:13.926960  409892 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:19:13.942056  409892 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37347
	I0929 12:19:13.942620  409892 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:19:13.943145  409892 main.go:141] libmachine: Using API Version  1
	I0929 12:19:13.943179  409892 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:19:13.943549  409892 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:19:13.943764  409892 main.go:141] libmachine: (old-k8s-version-832485) Calling .GetState
	I0929 12:19:13.946161  409892 main.go:141] libmachine: (old-k8s-version-832485) Calling .DriverName
	I0929 12:19:13.966541  409892 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-832485"
	I0929 12:19:13.966600  409892 host.go:66] Checking if "old-k8s-version-832485" exists ...
	I0929 12:19:13.967035  409892 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 12:19:13.967092  409892 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:19:13.972209  409892 out.go:179] * Verifying Kubernetes components...
	I0929 12:19:13.981840  409892 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34849
	I0929 12:19:14.003833  409892 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 12:19:14.004278  409892 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:19:14.004892  409892 main.go:141] libmachine: Using API Version  1
	I0929 12:19:14.004921  409892 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:19:14.005383  409892 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:19:14.006000  409892 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 12:19:14.006052  409892 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:19:14.006086  409892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:19:14.006095  409892 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:19:14.006113  409892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 12:19:14.006139  409892 main.go:141] libmachine: (old-k8s-version-832485) Calling .GetSSHHostname
	I0929 12:19:14.010758  409892 main.go:141] libmachine: (old-k8s-version-832485) DBG | domain old-k8s-version-832485 has defined MAC address 52:54:00:ad:60:42 in network mk-old-k8s-version-832485
	I0929 12:19:14.011388  409892 main.go:141] libmachine: (old-k8s-version-832485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:60:42", ip: ""} in network mk-old-k8s-version-832485: {Iface:virbr4 ExpiryTime:2025-09-29 13:18:37 +0000 UTC Type:0 Mac:52:54:00:ad:60:42 Iaid: IPaddr:192.168.61.163 Prefix:24 Hostname:old-k8s-version-832485 Clientid:01:52:54:00:ad:60:42}
	I0929 12:19:14.011423  409892 main.go:141] libmachine: (old-k8s-version-832485) DBG | domain old-k8s-version-832485 has defined IP address 192.168.61.163 and MAC address 52:54:00:ad:60:42 in network mk-old-k8s-version-832485
	I0929 12:19:14.011720  409892 main.go:141] libmachine: (old-k8s-version-832485) Calling .GetSSHPort
	I0929 12:19:14.011924  409892 main.go:141] libmachine: (old-k8s-version-832485) Calling .GetSSHKeyPath
	I0929 12:19:14.012143  409892 main.go:141] libmachine: (old-k8s-version-832485) Calling .GetSSHUsername
	I0929 12:19:14.012335  409892 sshutil.go:53] new ssh client: &{IP:192.168.61.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/old-k8s-version-832485/id_rsa Username:docker}
	I0929 12:19:14.026125  409892 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34815
	I0929 12:19:14.026886  409892 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:19:14.027479  409892 main.go:141] libmachine: Using API Version  1
	I0929 12:19:14.027507  409892 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:19:14.028033  409892 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:19:14.028349  409892 main.go:141] libmachine: (old-k8s-version-832485) Calling .GetState
	I0929 12:19:14.031077  409892 main.go:141] libmachine: (old-k8s-version-832485) Calling .DriverName
	I0929 12:19:14.031602  409892 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 12:19:14.031623  409892 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 12:19:14.031660  409892 main.go:141] libmachine: (old-k8s-version-832485) Calling .GetSSHHostname
	I0929 12:19:14.035753  409892 main.go:141] libmachine: (old-k8s-version-832485) DBG | domain old-k8s-version-832485 has defined MAC address 52:54:00:ad:60:42 in network mk-old-k8s-version-832485
	I0929 12:19:14.036407  409892 main.go:141] libmachine: (old-k8s-version-832485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:60:42", ip: ""} in network mk-old-k8s-version-832485: {Iface:virbr4 ExpiryTime:2025-09-29 13:18:37 +0000 UTC Type:0 Mac:52:54:00:ad:60:42 Iaid: IPaddr:192.168.61.163 Prefix:24 Hostname:old-k8s-version-832485 Clientid:01:52:54:00:ad:60:42}
	I0929 12:19:14.036457  409892 main.go:141] libmachine: (old-k8s-version-832485) DBG | domain old-k8s-version-832485 has defined IP address 192.168.61.163 and MAC address 52:54:00:ad:60:42 in network mk-old-k8s-version-832485
	I0929 12:19:14.036812  409892 main.go:141] libmachine: (old-k8s-version-832485) Calling .GetSSHPort
	I0929 12:19:14.037087  409892 main.go:141] libmachine: (old-k8s-version-832485) Calling .GetSSHKeyPath
	I0929 12:19:14.037374  409892 main.go:141] libmachine: (old-k8s-version-832485) Calling .GetSSHUsername
	I0929 12:19:14.037575  409892 sshutil.go:53] new ssh client: &{IP:192.168.61.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/old-k8s-version-832485/id_rsa Username:docker}
	I0929 12:19:14.039504  409892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 12:19:14.303570  409892 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:19:14.554233  409892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:19:14.589955  409892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 12:19:16.142947  409892 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.103401833s)
	I0929 12:19:16.142997  409892 start.go:976] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
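The long sed pipeline completed above rewrites CoreDNS's Corefile in place: it inserts a hosts{} stanza resolving host.minikube.internal to the gateway IP just before the forward plugin, then replaces the ConfigMap. The same insertion, sketched in Go on a plain Corefile string (the sample Corefile is a typical default, not copied from this cluster):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} stanza before the forward plugin,
// matching the insertion point of the sed "/i" command in the log.
func injectHostRecord(corefile, hostIP string) string {
	stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimLeft(line, " "), "forward . /etc/resolv.conf") {
			b.WriteString(stanza)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.61.1"))
}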
	I0929 12:19:16.143020  409892 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.839410804s)
	I0929 12:19:16.144290  409892 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-832485" to be "Ready" ...
	I0929 12:19:16.174842  409892 node_ready.go:49] node "old-k8s-version-832485" is "Ready"
	I0929 12:19:16.174889  409892 node_ready.go:38] duration metric: took 30.573334ms for node "old-k8s-version-832485" to be "Ready" ...
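node_ready.go's wait reduces to polling the node's Ready condition. A client-go sketch of the equivalent check; the kubeconfig path is the one the log updates, and minikube's own implementation differs in retry and backoff details:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/21655-365455/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(),
			"old-k8s-version-832485", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
}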
	I0929 12:19:16.174909  409892 api_server.go:52] waiting for apiserver process to appear ...
	I0929 12:19:16.174995  409892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:19:16.396471  409892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.806444714s)
	I0929 12:19:16.396510  409892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.842233868s)
	I0929 12:19:16.396539  409892 main.go:141] libmachine: Making call to close driver server
	I0929 12:19:16.396546  409892 api_server.go:72] duration metric: took 2.487483038s to wait for apiserver process to appear ...
	I0929 12:19:16.396566  409892 api_server.go:88] waiting for apiserver healthz status ...
	I0929 12:19:16.396577  409892 main.go:141] libmachine: Making call to close driver server
	I0929 12:19:16.396589  409892 api_server.go:253] Checking apiserver healthz at https://192.168.61.163:8443/healthz ...
	I0929 12:19:16.396599  409892 main.go:141] libmachine: (old-k8s-version-832485) Calling .Close
	I0929 12:19:16.396552  409892 main.go:141] libmachine: (old-k8s-version-832485) Calling .Close
	I0929 12:19:16.397071  409892 main.go:141] libmachine: Successfully made call to close driver server
	I0929 12:19:16.397094  409892 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 12:19:16.397104  409892 main.go:141] libmachine: Making call to close driver server
	I0929 12:19:16.397101  409892 main.go:141] libmachine: Successfully made call to close driver server
	I0929 12:19:16.397113  409892 main.go:141] libmachine: (old-k8s-version-832485) Calling .Close
	I0929 12:19:16.397119  409892 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 12:19:16.397131  409892 main.go:141] libmachine: Making call to close driver server
	I0929 12:19:16.397169  409892 main.go:141] libmachine: (old-k8s-version-832485) Calling .Close
	I0929 12:19:16.397386  409892 main.go:141] libmachine: Successfully made call to close driver server
	I0929 12:19:16.397402  409892 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 12:19:16.398022  409892 main.go:141] libmachine: Successfully made call to close driver server
	I0929 12:19:16.398043  409892 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 12:19:16.405212  409892 api_server.go:279] https://192.168.61.163:8443/healthz returned 200:
	ok
	I0929 12:19:16.406855  409892 api_server.go:141] control plane version: v1.28.0
	I0929 12:19:16.406886  409892 api_server.go:131] duration metric: took 10.311695ms to wait for apiserver health ...
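The healthz wait above polls https://192.168.61.163:8443/healthz until it answers 200 with body "ok". A self-contained Go sketch of that probe; minikube authenticates the endpoint with the cluster CA, whereas InsecureSkipVerify here is only to keep the example short:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.61.163:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("healthz returned 200: ok")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}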
	I0929 12:19:16.406897  409892 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 12:19:16.417697  409892 main.go:141] libmachine: Making call to close driver server
	I0929 12:19:16.417722  409892 main.go:141] libmachine: (old-k8s-version-832485) Calling .Close
	I0929 12:19:16.418018  409892 main.go:141] libmachine: (old-k8s-version-832485) DBG | Closing plugin on server side
	I0929 12:19:16.418026  409892 main.go:141] libmachine: Successfully made call to close driver server
	I0929 12:19:16.418039  409892 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 12:19:16.419778  409892 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0929 12:19:13.088158  410531 out.go:252]   - Generating certificates and keys ...
	I0929 12:19:13.088303  410531 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 12:19:13.088383  410531 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 12:19:13.629107  410531 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 12:19:14.086211  410531 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 12:19:14.123731  410531 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 12:19:14.387532  410531 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 12:19:14.647575  410531 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 12:19:14.647786  410531 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-046125 localhost] and IPs [192.168.83.157 127.0.0.1 ::1]
	I0929 12:19:14.833127  410531 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 12:19:14.833474  410531 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-046125 localhost] and IPs [192.168.83.157 127.0.0.1 ::1]
	I0929 12:19:14.927788  410531 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 12:19:15.057438  410531 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 12:19:15.259079  410531 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 12:19:15.259338  410531 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 12:19:15.738001  410531 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 12:19:16.171859  410531 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 12:19:16.363933  410531 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 12:19:16.546414  410531 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 12:19:17.096047  410531 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 12:19:17.096440  410531 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 12:19:17.098622  410531 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W0929 12:19:14.876919  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:19:17.376751  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	I0929 12:19:16.420536  409892 system_pods.go:59] 8 kube-system pods found
	I0929 12:19:16.420594  409892 system_pods.go:61] "coredns-5dd5756b68-kschf" [9265dda3-e7a6-4c95-abdb-a27e6783eecb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:19:16.420607  409892 system_pods.go:61] "coredns-5dd5756b68-zhmhx" [f3dfa501-10c6-40c0-9671-18500b34984c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:19:16.420619  409892 system_pods.go:61] "etcd-old-k8s-version-832485" [1d804bf0-6b1c-428b-af38-370e3c3df91c] Running
	I0929 12:19:16.420634  409892 system_pods.go:61] "kube-apiserver-old-k8s-version-832485" [df444041-46e8-4526-838f-815e91908d56] Running
	I0929 12:19:16.420642  409892 system_pods.go:61] "kube-controller-manager-old-k8s-version-832485" [f8781b18-c88e-4c28-b10e-ccd7f495edb0] Running
	I0929 12:19:16.420648  409892 system_pods.go:61] "kube-proxy-6kv4t" [441ab5f1-114f-4876-b620-36180affd2a3] Running
	I0929 12:19:16.420657  409892 system_pods.go:61] "kube-scheduler-old-k8s-version-832485" [7a416575-908f-4dee-a2a5-8756e2cfec5f] Running
	I0929 12:19:16.420661  409892 system_pods.go:61] "storage-provisioner" [1ee0fb5c-ff63-43cc-87a9-40d6d1eb50e4] Pending
	I0929 12:19:16.420677  409892 system_pods.go:74] duration metric: took 13.77189ms to wait for pod list to return data ...
	I0929 12:19:16.420687  409892 default_sa.go:34] waiting for default service account to be created ...
	I0929 12:19:16.421280  409892 addons.go:514] duration metric: took 2.512183744s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0929 12:19:16.427181  409892 default_sa.go:45] found service account: "default"
	I0929 12:19:16.427209  409892 default_sa.go:55] duration metric: took 6.51316ms for default service account to be created ...
	I0929 12:19:16.427221  409892 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 12:19:16.437925  409892 system_pods.go:86] 8 kube-system pods found
	I0929 12:19:16.437968  409892 system_pods.go:89] "coredns-5dd5756b68-kschf" [9265dda3-e7a6-4c95-abdb-a27e6783eecb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:19:16.437993  409892 system_pods.go:89] "coredns-5dd5756b68-zhmhx" [f3dfa501-10c6-40c0-9671-18500b34984c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:19:16.438025  409892 system_pods.go:89] "etcd-old-k8s-version-832485" [1d804bf0-6b1c-428b-af38-370e3c3df91c] Running
	I0929 12:19:16.438037  409892 system_pods.go:89] "kube-apiserver-old-k8s-version-832485" [df444041-46e8-4526-838f-815e91908d56] Running
	I0929 12:19:16.438043  409892 system_pods.go:89] "kube-controller-manager-old-k8s-version-832485" [f8781b18-c88e-4c28-b10e-ccd7f495edb0] Running
	I0929 12:19:16.438049  409892 system_pods.go:89] "kube-proxy-6kv4t" [441ab5f1-114f-4876-b620-36180affd2a3] Running
	I0929 12:19:16.438054  409892 system_pods.go:89] "kube-scheduler-old-k8s-version-832485" [7a416575-908f-4dee-a2a5-8756e2cfec5f] Running
	I0929 12:19:16.438065  409892 system_pods.go:89] "storage-provisioner" [1ee0fb5c-ff63-43cc-87a9-40d6d1eb50e4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 12:19:16.438078  409892 system_pods.go:126] duration metric: took 10.849324ms to wait for k8s-apps to be running ...
	I0929 12:19:16.438093  409892 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 12:19:16.438156  409892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:19:16.460805  409892 system_svc.go:56] duration metric: took 22.699894ms WaitForService to wait for kubelet
	I0929 12:19:16.460842  409892 kubeadm.go:578] duration metric: took 2.55177514s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:19:16.460862  409892 node_conditions.go:102] verifying NodePressure condition ...
	I0929 12:19:16.463746  409892 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0929 12:19:16.463773  409892 node_conditions.go:123] node cpu capacity is 2
	I0929 12:19:16.463785  409892 node_conditions.go:105] duration metric: took 2.918869ms to run NodePressure ...
	I0929 12:19:16.463797  409892 start.go:241] waiting for startup goroutines ...
	I0929 12:19:16.648546  409892 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-832485" context rescaled to 1 replicas
	I0929 12:19:16.648583  409892 start.go:246] waiting for cluster config update ...
	I0929 12:19:16.648595  409892 start.go:255] writing updated cluster config ...
	I0929 12:19:16.648948  409892 ssh_runner.go:195] Run: rm -f paused
	I0929 12:19:16.654464  409892 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:19:16.659480  409892 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-kschf" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 12:19:18.666498  409892 pod_ready.go:104] pod "coredns-5dd5756b68-kschf" is not "Ready", error: <nil>
	I0929 12:19:17.100291  410531 out.go:252]   - Booting up control plane ...
	I0929 12:19:17.100437  410531 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 12:19:17.100581  410531 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 12:19:17.100678  410531 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 12:19:17.119296  410531 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 12:19:17.119516  410531 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 12:19:17.131081  410531 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 12:19:17.131227  410531 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 12:19:17.131308  410531 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 12:19:17.329466  410531 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 12:19:17.329655  410531 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 12:19:17.830691  410531 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.679623ms
	I0929 12:19:17.835298  410531 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 12:19:17.835481  410531 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.83.157:8443/livez
	I0929 12:19:17.835632  410531 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 12:19:17.835751  410531 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 12:19:20.268655  410531 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.43450063s
	I0929 12:19:21.496660  410531 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.664242437s
	I0929 12:19:23.333481  410531 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.502040087s
	I0929 12:19:23.349405  410531 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 12:19:23.371192  410531 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 12:19:23.396542  410531 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 12:19:23.396861  410531 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-046125 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 12:19:23.414918  410531 kubeadm.go:310] [bootstrap-token] Using token: d4mkp2.32ql4jaq65mb76yz
	W0929 12:19:19.875480  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:19:21.875915  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:19:20.667311  409892 pod_ready.go:104] pod "coredns-5dd5756b68-kschf" is not "Ready", error: <nil>
	W0929 12:19:23.166415  409892 pod_ready.go:104] pod "coredns-5dd5756b68-kschf" is not "Ready", error: <nil>
	I0929 12:19:23.416253  410531 out.go:252]   - Configuring RBAC rules ...
	I0929 12:19:23.416402  410531 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 12:19:23.425194  410531 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 12:19:23.434203  410531 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 12:19:23.438741  410531 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 12:19:23.447531  410531 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 12:19:23.451967  410531 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 12:19:23.743956  410531 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 12:19:24.215133  410531 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 12:19:24.739028  410531 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 12:19:24.740125  410531 kubeadm.go:310] 
	I0929 12:19:24.740195  410531 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 12:19:24.740227  410531 kubeadm.go:310] 
	I0929 12:19:24.740329  410531 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 12:19:24.740340  410531 kubeadm.go:310] 
	I0929 12:19:24.740396  410531 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 12:19:24.740496  410531 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 12:19:24.740579  410531 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 12:19:24.740589  410531 kubeadm.go:310] 
	I0929 12:19:24.740673  410531 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 12:19:24.740688  410531 kubeadm.go:310] 
	I0929 12:19:24.740772  410531 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 12:19:24.740783  410531 kubeadm.go:310] 
	I0929 12:19:24.740865  410531 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 12:19:24.741030  410531 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 12:19:24.741103  410531 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 12:19:24.741110  410531 kubeadm.go:310] 
	I0929 12:19:24.741181  410531 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 12:19:24.741253  410531 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 12:19:24.741259  410531 kubeadm.go:310] 
	I0929 12:19:24.741354  410531 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token d4mkp2.32ql4jaq65mb76yz \
	I0929 12:19:24.741485  410531 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6584cfb39d6d521de94c50ba68c73bacf142e1b11809c32d2bb4689966c9f242 \
	I0929 12:19:24.741512  410531 kubeadm.go:310] 	--control-plane 
	I0929 12:19:24.741529  410531 kubeadm.go:310] 
	I0929 12:19:24.741657  410531 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 12:19:24.741685  410531 kubeadm.go:310] 
	I0929 12:19:24.741811  410531 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token d4mkp2.32ql4jaq65mb76yz \
	I0929 12:19:24.741984  410531 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6584cfb39d6d521de94c50ba68c73bacf142e1b11809c32d2bb4689966c9f242 
	I0929 12:19:24.743405  410531 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 12:19:24.743442  410531 cni.go:84] Creating CNI manager for ""
	I0929 12:19:24.743453  410531 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 12:19:24.745982  410531 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0929 12:19:24.747210  410531 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0929 12:19:24.762285  410531 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0929 12:19:24.788118  410531 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 12:19:24.788274  410531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:24.788294  410531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-046125 minikube.k8s.io/updated_at=2025_09_29T12_19_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e087d081f23c6d1317bb12845422265d8d3490cf minikube.k8s.io/name=embed-certs-046125 minikube.k8s.io/primary=true
	I0929 12:19:24.846577  410531 ops.go:34] apiserver oom_adj: -16
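ops.go derives that -16 by resolving the kube-apiserver pid and reading its oom_adj, exactly as the bash at 12:19:24.788118 does. The same probe in Go (first pgrep match only; reading /proc requires running on the node itself):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.Fields(string(out))[0] // take the first matching pid
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}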
	I0929 12:19:24.963799  410531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:25.464114  410531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:25.964818  410531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:26.464121  410531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:26.964663  410531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W0929 12:19:23.876065  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:19:26.375473  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:19:28.376466  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:19:25.676440  409892 pod_ready.go:104] pod "coredns-5dd5756b68-kschf" is not "Ready", error: <nil>
	I0929 12:19:26.162860  409892 pod_ready.go:99] pod "coredns-5dd5756b68-kschf" in "kube-system" namespace is gone: getting pod "coredns-5dd5756b68-kschf" in "kube-system" namespace (will retry): pods "coredns-5dd5756b68-kschf" not found
	I0929 12:19:26.162888  409892 pod_ready.go:86] duration metric: took 9.50337884s for pod "coredns-5dd5756b68-kschf" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:19:26.162898  409892 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-zhmhx" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 12:19:28.169700  409892 pod_ready.go:104] pod "coredns-5dd5756b68-zhmhx" is not "Ready", error: <nil>
	I0929 12:19:27.464645  410531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:27.963952  410531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:28.464241  410531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:28.964144  410531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:29.464083  410531 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:19:29.575144  410531 kubeadm.go:1105] duration metric: took 4.786935257s to wait for elevateKubeSystemPrivileges
	I0929 12:19:29.575204  410531 kubeadm.go:394] duration metric: took 17.125341965s to StartCluster
	I0929 12:19:29.575235  410531 settings.go:142] acquiring lock: {Name:mk1143e9344364f35458338f5354c9162487b91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:19:29.575338  410531 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21655-365455/kubeconfig
	I0929 12:19:29.577548  410531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-365455/kubeconfig: {Name:mkd302531ec3362506563544f43831c9980ac365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:19:29.577810  410531 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.83.157 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 12:19:29.577862  410531 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 12:19:29.577825  410531 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 12:19:29.577997  410531 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-046125"
	I0929 12:19:29.578025  410531 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-046125"
	I0929 12:19:29.578061  410531 addons.go:69] Setting default-storageclass=true in profile "embed-certs-046125"
	I0929 12:19:29.578091  410531 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-046125"
	I0929 12:19:29.578069  410531 config.go:182] Loaded profile config "embed-certs-046125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:19:29.578071  410531 host.go:66] Checking if "embed-certs-046125" exists ...
	I0929 12:19:29.578595  410531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 12:19:29.578641  410531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:19:29.578649  410531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 12:19:29.578704  410531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:19:29.579455  410531 out.go:179] * Verifying Kubernetes components...
	I0929 12:19:29.580856  410531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:19:29.593986  410531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34783
	I0929 12:19:29.594341  410531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45705
	I0929 12:19:29.594783  410531 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:19:29.595199  410531 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:19:29.595355  410531 main.go:141] libmachine: Using API Version  1
	I0929 12:19:29.595378  410531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:19:29.595708  410531 main.go:141] libmachine: Using API Version  1
	I0929 12:19:29.595737  410531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:19:29.595940  410531 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:19:29.596161  410531 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:19:29.596177  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetState
	I0929 12:19:29.596751  410531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 12:19:29.596801  410531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:19:29.600598  410531 addons.go:238] Setting addon default-storageclass=true in "embed-certs-046125"
	I0929 12:19:29.600665  410531 host.go:66] Checking if "embed-certs-046125" exists ...
	I0929 12:19:29.601126  410531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 12:19:29.601185  410531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:19:29.613064  410531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46145
	I0929 12:19:29.613638  410531 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:19:29.614282  410531 main.go:141] libmachine: Using API Version  1
	I0929 12:19:29.614313  410531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:19:29.614727  410531 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:19:29.614996  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetState
	I0929 12:19:29.615869  410531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42627
	I0929 12:19:29.616395  410531 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:19:29.616959  410531 main.go:141] libmachine: Using API Version  1
	I0929 12:19:29.617014  410531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:19:29.617480  410531 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:19:29.617521  410531 main.go:141] libmachine: (embed-certs-046125) Calling .DriverName
	I0929 12:19:29.618257  410531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 12:19:29.618321  410531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:19:29.619185  410531 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 12:19:29.620557  410531 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:19:29.620576  410531 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 12:19:29.620597  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHHostname
	I0929 12:19:29.625010  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:29.625629  410531 main.go:141] libmachine: (embed-certs-046125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:2b:98", ip: ""} in network mk-embed-certs-046125: {Iface:virbr3 ExpiryTime:2025-09-29 13:19:03 +0000 UTC Type:0 Mac:52:54:00:6c:2b:98 Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:embed-certs-046125 Clientid:01:52:54:00:6c:2b:98}
	I0929 12:19:29.625708  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined IP address 192.168.83.157 and MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:29.625935  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHPort
	I0929 12:19:29.626342  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHKeyPath
	I0929 12:19:29.626582  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHUsername
	I0929 12:19:29.626778  410531 sshutil.go:53] new ssh client: &{IP:192.168.83.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/embed-certs-046125/id_rsa Username:docker}
	I0929 12:19:29.636057  410531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39071
	I0929 12:19:29.636576  410531 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:19:29.637094  410531 main.go:141] libmachine: Using API Version  1
	I0929 12:19:29.637121  410531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:19:29.637520  410531 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:19:29.637724  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetState
	I0929 12:19:29.639918  410531 main.go:141] libmachine: (embed-certs-046125) Calling .DriverName
	I0929 12:19:29.640200  410531 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 12:19:29.640219  410531 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 12:19:29.640241  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHHostname
	I0929 12:19:29.643180  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:29.643731  410531 main.go:141] libmachine: (embed-certs-046125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:2b:98", ip: ""} in network mk-embed-certs-046125: {Iface:virbr3 ExpiryTime:2025-09-29 13:19:03 +0000 UTC Type:0 Mac:52:54:00:6c:2b:98 Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:embed-certs-046125 Clientid:01:52:54:00:6c:2b:98}
	I0929 12:19:29.643771  410531 main.go:141] libmachine: (embed-certs-046125) DBG | domain embed-certs-046125 has defined IP address 192.168.83.157 and MAC address 52:54:00:6c:2b:98 in network mk-embed-certs-046125
	I0929 12:19:29.643987  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHPort
	I0929 12:19:29.644221  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHKeyPath
	I0929 12:19:29.644370  410531 main.go:141] libmachine: (embed-certs-046125) Calling .GetSSHUsername
	I0929 12:19:29.644565  410531 sshutil.go:53] new ssh client: &{IP:192.168.83.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/embed-certs-046125/id_rsa Username:docker}
	I0929 12:19:29.907890  410531 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 12:19:29.939238  410531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:19:30.224049  410531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 12:19:30.234493  410531 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:19:30.796273  410531 start.go:976] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
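
The sed pipeline at 12:19:29.907890 above splices a hosts stanza (mapping host.minikube.internal to the host IP 192.168.83.1) and a log directive into the CoreDNS Corefile, then replaces the ConfigMap. A small Go sketch of the same textual edit, reconstructed from that sed expression (the sample Corefile is a typical default, not dumped from this cluster):

    // corednshosts.go: insert a hosts{} stanza ahead of the "forward" line of a
    // Corefile and a "log" directive ahead of "errors", mirroring the sed above.
    package main

    import (
    	"fmt"
    	"strings"
    )

    const corefile = `.:53 {
            errors
            health
            kubernetes cluster.local in-addr.arpa ip6.arpa
            forward . /etc/resolv.conf
            cache 30
    }
    `

    func main() {
    	hosts := "        hosts {\n" +
    		"           192.168.83.1 host.minikube.internal\n" +
    		"           fallthrough\n" +
    		"        }\n"
    	var out strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		// sed's "i" command inserts before the matched line.
    		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
    			out.WriteString(hosts)
    		}
    		if strings.TrimSpace(line) == "errors" {
    			out.WriteString("        log\n")
    		}
    		out.WriteString(line)
    	}
    	fmt.Print(out.String())
    }
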
	I0929 12:19:30.796523  410531 main.go:141] libmachine: Making call to close driver server
	I0929 12:19:30.796550  410531 main.go:141] libmachine: (embed-certs-046125) Calling .Close
	I0929 12:19:30.796891  410531 main.go:141] libmachine: Successfully made call to close driver server
	I0929 12:19:30.796921  410531 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 12:19:30.796932  410531 main.go:141] libmachine: Making call to close driver server
	I0929 12:19:30.796941  410531 main.go:141] libmachine: (embed-certs-046125) Calling .Close
	I0929 12:19:30.796946  410531 main.go:141] libmachine: (embed-certs-046125) DBG | Closing plugin on server side
	I0929 12:19:30.797191  410531 main.go:141] libmachine: Successfully made call to close driver server
	I0929 12:19:30.797205  410531 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 12:19:30.797968  410531 node_ready.go:35] waiting up to 6m0s for node "embed-certs-046125" to be "Ready" ...
	I0929 12:19:30.835126  410531 node_ready.go:49] node "embed-certs-046125" is "Ready"
	I0929 12:19:30.835166  410531 node_ready.go:38] duration metric: took 37.155818ms for node "embed-certs-046125" to be "Ready" ...
	I0929 12:19:30.835185  410531 api_server.go:52] waiting for apiserver process to appear ...
	I0929 12:19:30.835253  410531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:19:30.839030  410531 main.go:141] libmachine: Making call to close driver server
	I0929 12:19:30.839061  410531 main.go:141] libmachine: (embed-certs-046125) Calling .Close
	I0929 12:19:30.839384  410531 main.go:141] libmachine: (embed-certs-046125) DBG | Closing plugin on server side
	I0929 12:19:30.839439  410531 main.go:141] libmachine: Successfully made call to close driver server
	I0929 12:19:30.839449  410531 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 12:19:31.304797  410531 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-046125" context rescaled to 1 replicas
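
The kapi.go:214 line above rescales the "coredns" deployment to a single replica. With client-go that is a GetScale/UpdateScale round-trip on the deployment; a sketch, assuming the node-local kubeconfig path seen elsewhere in this log:

    // rescale.go: scale kube-system/coredns to one replica, as in kapi.go:214.
    package main

    import (
    	"context"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	ctx := context.Background()
    	deps := cs.AppsV1().Deployments("kube-system")
    	scale, err := deps.GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	scale.Spec.Replicas = 1
    	if _, err := deps.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
    		log.Fatal(err)
    	}
    }
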
	I0929 12:19:31.336149  410531 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.101614138s)
	I0929 12:19:31.336189  410531 api_server.go:72] duration metric: took 1.758339508s to wait for apiserver process to appear ...
	I0929 12:19:31.336201  410531 api_server.go:88] waiting for apiserver healthz status ...
	I0929 12:19:31.336222  410531 main.go:141] libmachine: Making call to close driver server
	I0929 12:19:31.336224  410531 api_server.go:253] Checking apiserver healthz at https://192.168.83.157:8443/healthz ...
	I0929 12:19:31.336233  410531 main.go:141] libmachine: (embed-certs-046125) Calling .Close
	I0929 12:19:31.336633  410531 main.go:141] libmachine: Successfully made call to close driver server
	I0929 12:19:31.336653  410531 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 12:19:31.336662  410531 main.go:141] libmachine: Making call to close driver server
	I0929 12:19:31.336634  410531 main.go:141] libmachine: (embed-certs-046125) DBG | Closing plugin on server side
	I0929 12:19:31.336680  410531 main.go:141] libmachine: (embed-certs-046125) Calling .Close
	I0929 12:19:31.337011  410531 main.go:141] libmachine: Successfully made call to close driver server
	I0929 12:19:31.337025  410531 main.go:141] libmachine: (embed-certs-046125) DBG | Closing plugin on server side
	I0929 12:19:31.337035  410531 main.go:141] libmachine: Making call to close connection to plugin binary
	I0929 12:19:31.338521  410531 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0929 12:19:31.340076  410531 addons.go:514] duration metric: took 1.762227734s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0929 12:19:31.362561  410531 api_server.go:279] https://192.168.83.157:8443/healthz returned 200:
	ok
	I0929 12:19:31.366182  410531 api_server.go:141] control plane version: v1.34.0
	I0929 12:19:31.366217  410531 api_server.go:131] duration metric: took 30.008433ms to wait for apiserver health ...
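
The healthz wait above polls https://192.168.83.157:8443/healthz until it answers 200 "ok". A minimal Go sketch of such a poll; unlike the real check, it skips TLS verification rather than trusting the cluster CA:

    // healthz.go: poll the apiserver /healthz endpoint until it returns 200.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"log"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
    		},
    	}
    	deadline := time.Now().Add(1 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.83.157:8443/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	log.Fatal("apiserver did not become healthy before the deadline")
    }
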
	I0929 12:19:31.366226  410531 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 12:19:31.371880  410531 system_pods.go:59] 8 kube-system pods found
	I0929 12:19:31.371941  410531 system_pods.go:61] "coredns-66bc5c9577-dcgmq" [23985398-d3e3-48b6-b73d-d859ff586b0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:19:31.371954  410531 system_pods.go:61] "coredns-66bc5c9577-l45gc" [d970f835-3c9a-4dd6-b6e3-7888f887b928] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:19:31.371968  410531 system_pods.go:61] "etcd-embed-certs-046125" [a1038ff5-f4fb-4a20-857a-7a5aa1dcec6e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:19:31.372009  410531 system_pods.go:61] "kube-apiserver-embed-certs-046125" [bba261e5-3286-40b8-b2f3-f70131b5eca0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:19:31.372024  410531 system_pods.go:61] "kube-controller-manager-embed-certs-046125" [82934399-eb7a-46f2-a8e8-3fdfa523dde0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:19:31.372034  410531 system_pods.go:61] "kube-proxy-f9x8p" [ad033433-9fe1-4c13-bd46-9d053a625c90] Running
	I0929 12:19:31.372041  410531 system_pods.go:61] "kube-scheduler-embed-certs-046125" [602dba9b-12bc-4294-9c67-6c133dd63370] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:19:31.372049  410531 system_pods.go:61] "storage-provisioner" [8841fa29-2e47-4f41-8c4b-8a13b7ba431a] Pending
	I0929 12:19:31.372059  410531 system_pods.go:74] duration metric: took 5.826051ms to wait for pod list to return data ...
	I0929 12:19:31.372072  410531 default_sa.go:34] waiting for default service account to be created ...
	I0929 12:19:31.383154  410531 default_sa.go:45] found service account: "default"
	I0929 12:19:31.383186  410531 default_sa.go:55] duration metric: took 11.10504ms for default service account to be created ...
	I0929 12:19:31.383198  410531 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 12:19:31.396341  410531 system_pods.go:86] 8 kube-system pods found
	I0929 12:19:31.396373  410531 system_pods.go:89] "coredns-66bc5c9577-dcgmq" [23985398-d3e3-48b6-b73d-d859ff586b0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:19:31.396381  410531 system_pods.go:89] "coredns-66bc5c9577-l45gc" [d970f835-3c9a-4dd6-b6e3-7888f887b928] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:19:31.396389  410531 system_pods.go:89] "etcd-embed-certs-046125" [a1038ff5-f4fb-4a20-857a-7a5aa1dcec6e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:19:31.396396  410531 system_pods.go:89] "kube-apiserver-embed-certs-046125" [bba261e5-3286-40b8-b2f3-f70131b5eca0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:19:31.396401  410531 system_pods.go:89] "kube-controller-manager-embed-certs-046125" [82934399-eb7a-46f2-a8e8-3fdfa523dde0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:19:31.396405  410531 system_pods.go:89] "kube-proxy-f9x8p" [ad033433-9fe1-4c13-bd46-9d053a625c90] Running
	I0929 12:19:31.396410  410531 system_pods.go:89] "kube-scheduler-embed-certs-046125" [602dba9b-12bc-4294-9c67-6c133dd63370] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:19:31.396414  410531 system_pods.go:89] "storage-provisioner" [8841fa29-2e47-4f41-8c4b-8a13b7ba431a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 12:19:31.396421  410531 system_pods.go:126] duration metric: took 13.217631ms to wait for k8s-apps to be running ...
	I0929 12:19:31.396428  410531 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 12:19:31.396486  410531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:19:31.416843  410531 system_svc.go:56] duration metric: took 20.401728ms WaitForService to wait for kubelet
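
The kubelet liveness check above shells out to systemctl and relies on the exit status alone (--quiet suppresses output). An equivalent sketch in Go, with the unit name simplified to kubelet:

    // kubeletactive.go: check whether the kubelet service is active,
    // like the "systemctl is-active --quiet ..." run logged above.
    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// A nonzero exit status means the unit is not active.
    	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
    		log.Fatal("kubelet service is not active: ", err)
    	}
    	log.Println("kubelet service is active")
    }
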
	I0929 12:19:31.416886  410531 kubeadm.go:578] duration metric: took 1.839036303s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:19:31.416913  410531 node_conditions.go:102] verifying NodePressure condition ...
	I0929 12:19:31.421191  410531 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0929 12:19:31.421233  410531 node_conditions.go:123] node cpu capacity is 2
	I0929 12:19:31.421252  410531 node_conditions.go:105] duration metric: took 4.332019ms to run NodePressure ...
	I0929 12:19:31.421266  410531 start.go:241] waiting for startup goroutines ...
	I0929 12:19:31.421273  410531 start.go:246] waiting for cluster config update ...
	I0929 12:19:31.421284  410531 start.go:255] writing updated cluster config ...
	I0929 12:19:31.421568  410531 ssh_runner.go:195] Run: rm -f paused
	I0929 12:19:31.429817  410531 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:19:31.435745  410531 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dcgmq" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 12:19:30.377938  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:19:32.877506  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:19:30.171526  409892 pod_ready.go:104] pod "coredns-5dd5756b68-zhmhx" is not "Ready", error: <nil>
	W0929 12:19:32.669239  409892 pod_ready.go:104] pod "coredns-5dd5756b68-zhmhx" is not "Ready", error: <nil>
	W0929 12:19:33.442237  410531 pod_ready.go:104] pod "coredns-66bc5c9577-dcgmq" is not "Ready", error: <nil>
	W0929 12:19:35.443686  410531 pod_ready.go:104] pod "coredns-66bc5c9577-dcgmq" is not "Ready", error: <nil>
	W0929 12:19:35.376314  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:19:37.875968  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:19:34.671604  409892 pod_ready.go:104] pod "coredns-5dd5756b68-zhmhx" is not "Ready", error: <nil>
	W0929 12:19:37.169042  409892 pod_ready.go:104] pod "coredns-5dd5756b68-zhmhx" is not "Ready", error: <nil>
	W0929 12:19:37.942629  410531 pod_ready.go:104] pod "coredns-66bc5c9577-dcgmq" is not "Ready", error: <nil>
	W0929 12:19:39.943639  410531 pod_ready.go:104] pod "coredns-66bc5c9577-dcgmq" is not "Ready", error: <nil>
	W0929 12:19:40.375856  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:19:42.876353  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:19:39.169282  409892 pod_ready.go:104] pod "coredns-5dd5756b68-zhmhx" is not "Ready", error: <nil>
	W0929 12:19:41.169474  409892 pod_ready.go:104] pod "coredns-5dd5756b68-zhmhx" is not "Ready", error: <nil>
	W0929 12:19:43.169734  409892 pod_ready.go:104] pod "coredns-5dd5756b68-zhmhx" is not "Ready", error: <nil>
	I0929 12:19:44.669349  409892 pod_ready.go:94] pod "coredns-5dd5756b68-zhmhx" is "Ready"
	I0929 12:19:44.669382  409892 pod_ready.go:86] duration metric: took 18.506477799s for pod "coredns-5dd5756b68-zhmhx" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:19:44.672529  409892 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-832485" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:19:44.677552  409892 pod_ready.go:94] pod "etcd-old-k8s-version-832485" is "Ready"
	I0929 12:19:44.677575  409892 pod_ready.go:86] duration metric: took 5.018292ms for pod "etcd-old-k8s-version-832485" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:19:44.680504  409892 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-832485" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:19:44.686392  409892 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-832485" is "Ready"
	I0929 12:19:44.686417  409892 pod_ready.go:86] duration metric: took 5.882084ms for pod "kube-apiserver-old-k8s-version-832485" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:19:44.689167  409892 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-832485" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:19:44.866859  409892 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-832485" is "Ready"
	I0929 12:19:44.866889  409892 pod_ready.go:86] duration metric: took 177.692981ms for pod "kube-controller-manager-old-k8s-version-832485" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:19:45.068407  409892 pod_ready.go:83] waiting for pod "kube-proxy-6kv4t" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:19:45.467437  409892 pod_ready.go:94] pod "kube-proxy-6kv4t" is "Ready"
	I0929 12:19:45.467468  409892 pod_ready.go:86] duration metric: took 399.026898ms for pod "kube-proxy-6kv4t" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:19:45.668027  409892 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-832485" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:19:46.067161  409892 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-832485" is "Ready"
	I0929 12:19:46.067196  409892 pod_ready.go:86] duration metric: took 399.129636ms for pod "kube-scheduler-old-k8s-version-832485" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:19:46.067212  409892 pod_ready.go:40] duration metric: took 29.412714399s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:19:46.114906  409892 start.go:623] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I0929 12:19:46.116364  409892 out.go:203] 
	W0929 12:19:46.117474  409892 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I0929 12:19:46.118584  409892 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I0929 12:19:46.119830  409892 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-832485" cluster and "default" namespace by default
	I0929 12:19:42.439310  410531 pod_ready.go:99] pod "coredns-66bc5c9577-dcgmq" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-dcgmq" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-dcgmq" not found
	I0929 12:19:42.439342  410531 pod_ready.go:86] duration metric: took 11.003561846s for pod "coredns-66bc5c9577-dcgmq" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:19:42.439357  410531 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-l45gc" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 12:19:44.445743  410531 pod_ready.go:104] pod "coredns-66bc5c9577-l45gc" is not "Ready", error: <nil>
	W0929 12:19:46.446675  410531 pod_ready.go:104] pod "coredns-66bc5c9577-l45gc" is not "Ready", error: <nil>
	W0929 12:19:44.876548  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:19:47.375399  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:19:48.945284  410531 pod_ready.go:104] pod "coredns-66bc5c9577-l45gc" is not "Ready", error: <nil>
	W0929 12:19:50.946316  410531 pod_ready.go:104] pod "coredns-66bc5c9577-l45gc" is not "Ready", error: <nil>
	W0929 12:19:49.377961  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:19:51.876442  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:19:53.446343  410531 pod_ready.go:104] pod "coredns-66bc5c9577-l45gc" is not "Ready", error: <nil>
	W0929 12:19:55.947023  410531 pod_ready.go:104] pod "coredns-66bc5c9577-l45gc" is not "Ready", error: <nil>
	W0929 12:19:54.375816  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:19:56.376998  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:19:58.377474  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:19:57.948098  410531 pod_ready.go:104] pod "coredns-66bc5c9577-l45gc" is not "Ready", error: <nil>
	W0929 12:20:00.445639  410531 pod_ready.go:104] pod "coredns-66bc5c9577-l45gc" is not "Ready", error: <nil>
	W0929 12:20:00.875289  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:20:02.875713  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:20:02.446593  410531 pod_ready.go:104] pod "coredns-66bc5c9577-l45gc" is not "Ready", error: <nil>
	W0929 12:20:04.947127  410531 pod_ready.go:104] pod "coredns-66bc5c9577-l45gc" is not "Ready", error: <nil>
	W0929 12:20:04.876140  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:20:07.374277  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:20:07.446673  410531 pod_ready.go:104] pod "coredns-66bc5c9577-l45gc" is not "Ready", error: <nil>
	W0929 12:20:09.945671  410531 pod_ready.go:104] pod "coredns-66bc5c9577-l45gc" is not "Ready", error: <nil>
	W0929 12:20:11.945822  410531 pod_ready.go:104] pod "coredns-66bc5c9577-l45gc" is not "Ready", error: <nil>
	W0929 12:20:09.377050  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:20:11.876591  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	I0929 12:20:12.449024  410531 pod_ready.go:94] pod "coredns-66bc5c9577-l45gc" is "Ready"
	I0929 12:20:12.449065  410531 pod_ready.go:86] duration metric: took 30.009699897s for pod "coredns-66bc5c9577-l45gc" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:20:12.452562  410531 pod_ready.go:83] waiting for pod "etcd-embed-certs-046125" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:20:12.458853  410531 pod_ready.go:94] pod "etcd-embed-certs-046125" is "Ready"
	I0929 12:20:12.458883  410531 pod_ready.go:86] duration metric: took 6.294042ms for pod "etcd-embed-certs-046125" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:20:12.461322  410531 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-046125" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:20:12.466434  410531 pod_ready.go:94] pod "kube-apiserver-embed-certs-046125" is "Ready"
	I0929 12:20:12.466464  410531 pod_ready.go:86] duration metric: took 5.112163ms for pod "kube-apiserver-embed-certs-046125" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:20:12.469124  410531 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-046125" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:20:12.643705  410531 pod_ready.go:94] pod "kube-controller-manager-embed-certs-046125" is "Ready"
	I0929 12:20:12.643737  410531 pod_ready.go:86] duration metric: took 174.589841ms for pod "kube-controller-manager-embed-certs-046125" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:20:12.844094  410531 pod_ready.go:83] waiting for pod "kube-proxy-f9x8p" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:20:13.243590  410531 pod_ready.go:94] pod "kube-proxy-f9x8p" is "Ready"
	I0929 12:20:13.243620  410531 pod_ready.go:86] duration metric: took 399.497948ms for pod "kube-proxy-f9x8p" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:20:13.446718  410531 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-046125" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:20:13.844104  410531 pod_ready.go:94] pod "kube-scheduler-embed-certs-046125" is "Ready"
	I0929 12:20:13.844139  410531 pod_ready.go:86] duration metric: took 397.392215ms for pod "kube-scheduler-embed-certs-046125" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:20:13.844153  410531 pod_ready.go:40] duration metric: took 42.414276944s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:20:13.890603  410531 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 12:20:13.892336  410531 out.go:179] * Done! kubectl is now configured to use "embed-certs-046125" cluster and "default" namespace by default
	W0929 12:20:14.376889  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:20:16.876851  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:20:19.376046  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:20:21.875056  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:20:23.875940  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:20:25.876271  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:20:28.376368  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:20:30.377998  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:20:32.876806  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:20:35.375170  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:20:37.875333  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:20:40.375003  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:20:42.375393  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:20:44.375812  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:20:46.376932  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:20:48.875432  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:20:50.875703  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:20:53.376126  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:20:55.875840  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:20:58.375809  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:21:00.376229  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:21:02.875914  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:21:05.375371  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	W0929 12:21:07.378101  405898 pod_ready.go:104] pod "kube-proxy-whtqx" is not "Ready", error: <nil>
	I0929 12:21:08.327491  405898 pod_ready.go:86] duration metric: took 3m48.958276589s for pod "kube-proxy-whtqx" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 12:21:08.327528  405898 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-proxy" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I0929 12:21:08.327546  405898 pod_ready.go:40] duration metric: took 4m0.001144706s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:21:08.329189  405898 out.go:203] 
	W0929 12:21:08.330701  405898 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I0929 12:21:08.332042  405898 out.go:203] 
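
The 405898 run (profile "pause-448284", per the CRI-O section below) is the one that fails: kube-proxy-whtqx never reports Ready within the 4m budget, so the run exits with GUEST_START / "context deadline exceeded". The Ready test reduces to inspecting the pod's PodReady condition; a client-go sketch of such a deadline-bounded wait (kubeconfig path assumed):

    // podready.go: wait until a pod reports PodReady, with a deadline,
    // mirroring the pod_ready.go polling above.
    package main

    import (
    	"context"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func isReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-whtqx", metav1.GetOptions{})
    		if err == nil && isReady(pod) {
    			log.Println("pod is Ready")
    			return
    		}
    		select {
    		case <-ctx.Done():
    			log.Fatal("context deadline exceeded") // matches the failure above
    		case <-time.After(2 * time.Second):
    		}
    	}
    }
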
	
	
	==> CRI-O <==
	Sep 29 12:21:09 pause-448284 crio[3440]: time="2025-09-29 12:21:09.045327645Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759148469045305526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68fb553d-83a4-4fe6-8e52-82c994604022 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 12:21:09 pause-448284 crio[3440]: time="2025-09-29 12:21:09.046051259Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb0ea2e8-5da1-4b30-b5e7-c738933b0cb8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 12:21:09 pause-448284 crio[3440]: time="2025-09-29 12:21:09.046119977Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb0ea2e8-5da1-4b30-b5e7-c738933b0cb8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 12:21:09 pause-448284 crio[3440]: time="2025-09-29 12:21:09.046359313Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4a04acb0a54a2a6109e09a1d7b8a17583d9484af183e3fc6981d412e7ba59b1,PodSandboxId:35b1358a3a9cba63aa46b4eb80c88e83d9365d8e757da58ffffedb636bebfd54,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759148226681280809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gdw6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6f40860-3a4c-4115-b188-796234dcd556,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f648888e4077d1078096b2ecb31c2c4c48da18dc50dad463405bc4620991ba4a,PodSandboxId:62547d9ad28628ca912501378960b868fa287c876c09caed5e895cb1954f14db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759148222395625949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fdb09b50f2f31fcab7fd51bd0d13713,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d3dd5527fe19a0d44bd21bdc6d14bd3dfc0d1b2c66db5c654ea78b2d7be4872,PodSandboxId:10f7f00b1bebfb9325c369cf3e11b4a3120a1470ea368fc078c5866e3c4ab5b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759148222129685627,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d0213b81f4d8434185ab16449a976d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5c86882efe9e1834cd208fe8a1e9eb458614093634096e76edb181c1df14a9,PodSandboxId:a68d0add50e5820e3f0c7d90f52d36a7af3076b653f3cf74b13d9e1ddfc1322d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759148222124745391,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0885f4db30b92999f03d70034a18a6f9,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397cac24c5ac7061b13d625683b63d7d76c23e5890a9504f70a685ec8975298e,PodSandboxId:8186f992b9e70bedfcc08d7e17bfca74849f286fe1f024ee36a8339a5d112022,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759148222094262842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3596d051327f8412b28855c576caf740,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92ea40987ef263104fc84da9b5fab382f0ca424dc09d428b06e20eb93c2447c,PodSandboxId:10f7f00b1bebfb9325c369cf3e11b4a3120a1470ea368fc078c5866e3c4ab5b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759148217202059417,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d0213b81f4d8434185ab16449a976d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f47960f0db4d842151528da6c7020d59f4d6bede1042da342f257bc15eaf437,PodSandboxId:ecf865a6a2c06042ef713c25e8e304fa52cc9c82cb6e8531a9b937877d5ddb23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759148124459942126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gdw6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6f40860-3a4c-4115-b188-796234dcd556,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e98a67edfdc65a81f26cbc0e77bfa203408717769b7fcfcad8430bdcfd69b548,PodSandboxId:94659dbfe6f3fee1b59a902b9fb496d33d1ae4fdf789d117c860d245c3508f59,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759148123370745209,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3596d051327f8412b28855c576caf740,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5d57fcd84b137bc2cabf619630723bdacad6955827512a828ae2a79965a3466,PodSandboxId:f551896494f877f49534d4ac0a30bd1bcbe216a2adc24e5a3543e2b29f35dabe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1759148123239682911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0885f4db30b92999f03d70034a18a6f9,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b9ac69f6bf6aa520160c113d51efdb95c05d442b7ed06d627b337b0fe5f1eca,PodSandboxId:851671ab9c92004431b41b5ca1a28a8569bb28a1f6cdd4286bb9c5139e58c055,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1759148123141309626,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fdb09b50f2f31fcab7fd51bd0d13713,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:948b9271b3533c244b8b8078effff4d6dc750055e7998f71476db3ff7100e454,PodSandboxId:bd1208889672e5f8e257662b8024a6df8bf526064833e29b0ce8143a1c3d9c4c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759148068168237661,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-whtqx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 418d6401-682d-449d-b126-511492131712,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb0ea2e8-5da1-4b30-b5e7-c738933b0cb8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 12:21:09 pause-448284 crio[3440]: time="2025-09-29 12:21:09.091098106Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf5b422a-0946-4f73-83fd-2760899c48db name=/runtime.v1.RuntimeService/Version
	Sep 29 12:21:09 pause-448284 crio[3440]: time="2025-09-29 12:21:09.091420305Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf5b422a-0946-4f73-83fd-2760899c48db name=/runtime.v1.RuntimeService/Version
	Sep 29 12:21:09 pause-448284 crio[3440]: time="2025-09-29 12:21:09.092915725Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eec29c98-afb8-4746-aaf5-aaebf1168141 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 12:21:09 pause-448284 crio[3440]: time="2025-09-29 12:21:09.093765202Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759148469093742471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eec29c98-afb8-4746-aaf5-aaebf1168141 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 12:21:09 pause-448284 crio[3440]: time="2025-09-29 12:21:09.094634958Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6ec0bfd-5ee0-4c18-8206-bbe7fac8ad33 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 12:21:09 pause-448284 crio[3440]: time="2025-09-29 12:21:09.094704788Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6ec0bfd-5ee0-4c18-8206-bbe7fac8ad33 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 12:21:09 pause-448284 crio[3440]: time="2025-09-29 12:21:09.094964823Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4a04acb0a54a2a6109e09a1d7b8a17583d9484af183e3fc6981d412e7ba59b1,PodSandboxId:35b1358a3a9cba63aa46b4eb80c88e83d9365d8e757da58ffffedb636bebfd54,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759148226681280809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gdw6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6f40860-3a4c-4115-b188-796234dcd556,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f648888e4077d1078096b2ecb31c2c4c48da18dc50dad463405bc4620991ba4a,PodSandboxId:62547d9ad28628ca912501378960b868fa287c876c09caed5e895cb1954f14db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759148222395625949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fdb09b50f2f31fcab7fd51bd0d13713,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d3dd5527fe19a0d44bd21bdc6d14bd3dfc0d1b2c66db5c654ea78b2d7be4872,PodSandboxId:10f7f00b1bebfb9325c369cf3e11b4a3120a1470ea368fc078c5866e3c4ab5b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759148222129685627,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d0213b81f4d8434185ab16449a976d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5c86882efe9e1834cd208fe8a1e9eb458614093634096e76edb181c1df14a9,PodSandboxId:a68d0add50e5820e3f0c7d90f52d36a7af3076b653f3cf74b13d9e1ddfc1322d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759148222124745391,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0885f4db30b92999f03d70034a18a6f9,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397cac24c5ac7061b13d625683b63d7d76c23e5890a9504f70a685ec8975298e,PodSandboxId:8186f992b9e70bedfcc08d7e17bfca74849f286fe1f024ee36a8339a5d112022,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759148222094262842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3596d051327f8412b28855c576caf740,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92ea40987ef263104fc84da9b5fab382f0ca424dc09d428b06e20eb93c2447c,PodSandboxId:10f7f00b1bebf
b9325c369cf3e11b4a3120a1470ea368fc078c5866e3c4ab5b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759148217202059417,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d0213b81f4d8434185ab16449a976d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&
Container{Id:1f47960f0db4d842151528da6c7020d59f4d6bede1042da342f257bc15eaf437,PodSandboxId:ecf865a6a2c06042ef713c25e8e304fa52cc9c82cb6e8531a9b937877d5ddb23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759148124459942126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gdw6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6f40860-3a4c-4115-b188-796234dcd556,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-p
robe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e98a67edfdc65a81f26cbc0e77bfa203408717769b7fcfcad8430bdcfd69b548,PodSandboxId:94659dbfe6f3fee1b59a902b9fb496d33d1ae4fdf789d117c860d245c3508f59,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759148123370745209,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3596d051327f8412b28855c57
6caf740,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5d57fcd84b137bc2cabf619630723bdacad6955827512a828ae2a79965a3466,PodSandboxId:f551896494f877f49534d4ac0a30bd1bcbe216a2adc24e5a3543e2b29f35dabe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1759148123239682911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0885f4db30b92999f03d70034a18a6f9,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b9ac69f6bf6aa520160c113d51efdb95c05d442b7ed06d627b337b0fe5f1eca,PodSandboxId:851671ab9c92004431b41b5ca1a28a8569bb28a1f6cdd4286bb9c5139e58c055,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:175914812314130962
6,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fdb09b50f2f31fcab7fd51bd0d13713,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:948b9271b3533c244b8b8078effff4d6dc750055e7998f71476db3ff7100e454,PodSandboxId:bd1208889672e5f8e257662b8024a6df8bf526064833e29b0ce8143a1c3d9c4c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759148068168237661,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-whtqx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 418d6401-682d-449d-b126-511492131712,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e6ec0bfd-5ee0-4c18-8206-bbe7fac8ad33 name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 12:21:09 pause-448284 crio[3440]: time="2025-09-29 12:21:09.135318200Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c4493b61-8d0d-4405-8027-2d944130e7ba name=/runtime.v1.RuntimeService/Version
	Sep 29 12:21:09 pause-448284 crio[3440]: time="2025-09-29 12:21:09.135404753Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4493b61-8d0d-4405-8027-2d944130e7ba name=/runtime.v1.RuntimeService/Version
	Sep 29 12:21:09 pause-448284 crio[3440]: time="2025-09-29 12:21:09.137262796Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4684ad7e-990e-4115-98d8-4cb5c9a256a5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 12:21:09 pause-448284 crio[3440]: time="2025-09-29 12:21:09.137718414Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759148469137689970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4684ad7e-990e-4115-98d8-4cb5c9a256a5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 12:21:09 pause-448284 crio[3440]: time="2025-09-29 12:21:09.138851595Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0b9e17f6-4dd4-45f8-b527-5ac0a1be28ee name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 12:21:09 pause-448284 crio[3440]: time="2025-09-29 12:21:09.138910771Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0b9e17f6-4dd4-45f8-b527-5ac0a1be28ee name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 12:21:09 pause-448284 crio[3440]: time="2025-09-29 12:21:09.139112523Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4a04acb0a54a2a6109e09a1d7b8a17583d9484af183e3fc6981d412e7ba59b1,PodSandboxId:35b1358a3a9cba63aa46b4eb80c88e83d9365d8e757da58ffffedb636bebfd54,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759148226681280809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gdw6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6f40860-3a4c-4115-b188-796234dcd556,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f648888e4077d1078096b2ecb31c2c4c48da18dc50dad463405bc4620991ba4a,PodSandboxId:62547d9ad28628ca912501378960b868fa287c876c09caed5e895cb1954f14db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759148222395625949,Labels
:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fdb09b50f2f31fcab7fd51bd0d13713,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d3dd5527fe19a0d44bd21bdc6d14bd3dfc0d1b2c66db5c654ea78b2d7be4872,PodSandboxId:10f7f00b1bebfb9325c369cf3e11b4a3120a1470ea368fc078c5866e3c4ab5b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46
169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759148222129685627,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d0213b81f4d8434185ab16449a976d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5c86882efe9e1834cd208fe8a1e9eb458614093634096e76edb181c1df14a9,PodSandboxId:a68d0add50e5820e3f0c7d90f52d36a7af3076b653f3cf74b13d9e1ddfc1322d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1
308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759148222124745391,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0885f4db30b92999f03d70034a18a6f9,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397cac24c5ac7061b13d625683b63d7d76c23e5890a9504f70a685ec8975298e,PodSandboxId:8186f992b9e70bedfcc08d7e17bfca74849f286fe1f024ee36a8339a5d112022,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759148222094262842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3596d051327f8412b28855c576caf740,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92ea40987ef263104fc84da9b5fab382f0ca424dc09d428b06e20eb93c2447c,PodSandboxId:10f7f00b1bebf
b9325c369cf3e11b4a3120a1470ea368fc078c5866e3c4ab5b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759148217202059417,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d0213b81f4d8434185ab16449a976d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&
Container{Id:1f47960f0db4d842151528da6c7020d59f4d6bede1042da342f257bc15eaf437,PodSandboxId:ecf865a6a2c06042ef713c25e8e304fa52cc9c82cb6e8531a9b937877d5ddb23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759148124459942126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gdw6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6f40860-3a4c-4115-b188-796234dcd556,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-p
robe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e98a67edfdc65a81f26cbc0e77bfa203408717769b7fcfcad8430bdcfd69b548,PodSandboxId:94659dbfe6f3fee1b59a902b9fb496d33d1ae4fdf789d117c860d245c3508f59,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759148123370745209,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3596d051327f8412b28855c57
6caf740,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5d57fcd84b137bc2cabf619630723bdacad6955827512a828ae2a79965a3466,PodSandboxId:f551896494f877f49534d4ac0a30bd1bcbe216a2adc24e5a3543e2b29f35dabe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1759148123239682911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0885f4db30b92999f03d70034a18a6f9,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b9ac69f6bf6aa520160c113d51efdb95c05d442b7ed06d627b337b0fe5f1eca,PodSandboxId:851671ab9c92004431b41b5ca1a28a8569bb28a1f6cdd4286bb9c5139e58c055,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:175914812314130962
6,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fdb09b50f2f31fcab7fd51bd0d13713,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:948b9271b3533c244b8b8078effff4d6dc750055e7998f71476db3ff7100e454,PodSandboxId:bd1208889672e5f8e257662b8024a6df8bf526064833e29b0ce8143a1c3d9c4c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759148068168237661,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-whtqx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 418d6401-682d-449d-b126-511492131712,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0b9e17f6-4dd4-45f8-b527-5ac0a1be28ee name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 12:21:09 pause-448284 crio[3440]: time="2025-09-29 12:21:09.184304578Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2215d29e-65d7-4006-ad4a-789d9ab540a6 name=/runtime.v1.RuntimeService/Version
	Sep 29 12:21:09 pause-448284 crio[3440]: time="2025-09-29 12:21:09.184393896Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2215d29e-65d7-4006-ad4a-789d9ab540a6 name=/runtime.v1.RuntimeService/Version
	Sep 29 12:21:09 pause-448284 crio[3440]: time="2025-09-29 12:21:09.185830844Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f8fbc98c-cc92-4a29-8686-1f51cd680685 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 12:21:09 pause-448284 crio[3440]: time="2025-09-29 12:21:09.186207441Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759148469186185761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f8fbc98c-cc92-4a29-8686-1f51cd680685 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 29 12:21:09 pause-448284 crio[3440]: time="2025-09-29 12:21:09.186823630Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=abf3919d-6ffd-4f4f-af34-6464cfdb09bb name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 12:21:09 pause-448284 crio[3440]: time="2025-09-29 12:21:09.186890954Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=abf3919d-6ffd-4f4f-af34-6464cfdb09bb name=/runtime.v1.RuntimeService/ListContainers
	Sep 29 12:21:09 pause-448284 crio[3440]: time="2025-09-29 12:21:09.187092071Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4a04acb0a54a2a6109e09a1d7b8a17583d9484af183e3fc6981d412e7ba59b1,PodSandboxId:35b1358a3a9cba63aa46b4eb80c88e83d9365d8e757da58ffffedb636bebfd54,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759148226681280809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gdw6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6f40860-3a4c-4115-b188-796234dcd556,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f648888e4077d1078096b2ecb31c2c4c48da18dc50dad463405bc4620991ba4a,PodSandboxId:62547d9ad28628ca912501378960b868fa287c876c09caed5e895cb1954f14db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1759148222395625949,Labels
:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fdb09b50f2f31fcab7fd51bd0d13713,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d3dd5527fe19a0d44bd21bdc6d14bd3dfc0d1b2c66db5c654ea78b2d7be4872,PodSandboxId:10f7f00b1bebfb9325c369cf3e11b4a3120a1470ea368fc078c5866e3c4ab5b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46
169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1759148222129685627,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d0213b81f4d8434185ab16449a976d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5c86882efe9e1834cd208fe8a1e9eb458614093634096e76edb181c1df14a9,PodSandboxId:a68d0add50e5820e3f0c7d90f52d36a7af3076b653f3cf74b13d9e1ddfc1322d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1
308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1759148222124745391,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0885f4db30b92999f03d70034a18a6f9,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397cac24c5ac7061b13d625683b63d7d76c23e5890a9504f70a685ec8975298e,PodSandboxId:8186f992b9e70bedfcc08d7e17bfca74849f286fe1f024ee36a8339a5d112022,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759148222094262842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3596d051327f8412b28855c576caf740,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92ea40987ef263104fc84da9b5fab382f0ca424dc09d428b06e20eb93c2447c,PodSandboxId:10f7f00b1bebf
b9325c369cf3e11b4a3120a1470ea368fc078c5866e3c4ab5b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1759148217202059417,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d0213b81f4d8434185ab16449a976d,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&
Container{Id:1f47960f0db4d842151528da6c7020d59f4d6bede1042da342f257bc15eaf437,PodSandboxId:ecf865a6a2c06042ef713c25e8e304fa52cc9c82cb6e8531a9b937877d5ddb23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759148124459942126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gdw6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6f40860-3a4c-4115-b188-796234dcd556,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-p
robe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e98a67edfdc65a81f26cbc0e77bfa203408717769b7fcfcad8430bdcfd69b548,PodSandboxId:94659dbfe6f3fee1b59a902b9fb496d33d1ae4fdf789d117c860d245c3508f59,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759148123370745209,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3596d051327f8412b28855c57
6caf740,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5d57fcd84b137bc2cabf619630723bdacad6955827512a828ae2a79965a3466,PodSandboxId:f551896494f877f49534d4ac0a30bd1bcbe216a2adc24e5a3543e2b29f35dabe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1759148123239682911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0885f4db30b92999f03d70034a18a6f9,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b9ac69f6bf6aa520160c113d51efdb95c05d442b7ed06d627b337b0fe5f1eca,PodSandboxId:851671ab9c92004431b41b5ca1a28a8569bb28a1f6cdd4286bb9c5139e58c055,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:175914812314130962
6,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-448284,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fdb09b50f2f31fcab7fd51bd0d13713,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:948b9271b3533c244b8b8078effff4d6dc750055e7998f71476db3ff7100e454,PodSandboxId:bd1208889672e5f8e257662b8024a6df8bf526064833e29b0ce8143a1c3d9c4c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1759148068168237661,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-whtqx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 418d6401-682d-449d-b126-511492131712,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=abf3919d-6ffd-4f4f-af34-6464cfdb09bb name=/runtime.v1.RuntimeService/ListContainers
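	
	The Request/Response pairs above are plain CRI gRPC calls against crio's RuntimeService and ImageService. For reference, a minimal Go sketch that issues the same Version and ImageFsInfo RPCs, assuming the default CRI-O socket path /var/run/crio/crio.sock and the generated k8s.io/cri-api v1 client (a sketch for orientation, not part of the test harness):
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Dial the CRI-O socket (default path; adjust if the node differs).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
	
		// Same RPC as the "/runtime.v1.RuntimeService/Version" lines above.
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(ver.RuntimeName, ver.RuntimeVersion)
	
		// Same RPC as the "/runtime.v1.ImageService/ImageFsInfo" lines above.
		img := runtimeapi.NewImageServiceClient(conn)
		fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, u := range fs.ImageFilesystems {
			fmt.Println(u.FsId.Mountpoint, u.UsedBytes.Value)
		}
	}
	
	crictl version and crictl imagefsinfo surface the same data from the CLI; the raw RPCs above are simply what these debug lines record.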
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d4a04acb0a54a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   4 minutes ago       Running             coredns                   2                   35b1358a3a9cb       coredns-66bc5c9577-gdw6h
	f648888e4077d       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   4 minutes ago       Running             kube-controller-manager   2                   62547d9ad2862       kube-controller-manager-pause-448284
	1d3dd5527fe19       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   4 minutes ago       Running             kube-scheduler            3                   10f7f00b1bebf       kube-scheduler-pause-448284
	7a5c86882efe9       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   4 minutes ago       Running             kube-apiserver            2                   a68d0add50e58       kube-apiserver-pause-448284
	397cac24c5ac7       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   4 minutes ago       Running             etcd                      2                   8186f992b9e70       etcd-pause-448284
	e92ea40987ef2       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   4 minutes ago       Exited              kube-scheduler            2                   10f7f00b1bebf       kube-scheduler-pause-448284
	1f47960f0db4d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   5 minutes ago       Exited              coredns                   1                   ecf865a6a2c06       coredns-66bc5c9577-gdw6h
	e98a67edfdc65       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   5 minutes ago       Exited              etcd                      1                   94659dbfe6f3f       etcd-pause-448284
	f5d57fcd84b13       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   5 minutes ago       Exited              kube-apiserver            1                   f551896494f87       kube-apiserver-pause-448284
	0b9ac69f6bf6a       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   5 minutes ago       Exited              kube-controller-manager   1                   851671ab9c920       kube-controller-manager-pause-448284
	948b9271b3533       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   6 minutes ago       Exited              kube-proxy                0                   bd1208889672e       kube-proxy-whtqx
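	
	The Running/Exited split in this table is exactly what a filtered ListContainers returns. A hedged sketch of the same query restricted to CONTAINER_RUNNING, under the same socket-path assumption as the previous sketch:
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
	
		// Only CONTAINER_RUNNING, i.e. the "Running" rows of the table above.
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{
				State: &runtimeapi.ContainerStateValue{
					State: runtimeapi.ContainerState_CONTAINER_RUNNING,
				},
			},
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			// Truncate the ID to 13 chars, matching the table format above.
			fmt.Println(c.Id[:13], c.Metadata.Name, c.Metadata.Attempt)
		}
	}
	
	Dropping the filter reproduces the "No filters were applied, returning full container list" path logged earlier.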
	
	
	==> coredns [1f47960f0db4d842151528da6c7020d59f4d6bede1042da342f257bc15eaf437] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:47632 - 31007 "HINFO IN 5144337710322560192.4447753507118516859. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.0230239s
	
	
	==> coredns [d4a04acb0a54a2a6109e09a1d7b8a17583d9484af183e3fc6981d412e7ba59b1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54962 - 8975 "HINFO IN 5876726147971458588.1824388722948007112. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021174286s
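	
	Both coredns instances report the same configuration SHA512, so the restart did not change config. The liveness-probe (8080) and readiness-probe (8181) containerPorts declared in the container annotations above map to coredns's health and ready plugins at /health and /ready; a small sketch that polls them, where podIP is a placeholder to be filled in from the pod's actual address:
	
	package main
	
	import (
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		podIP := "10.244.0.2" // placeholder: take the real IP from the pod status
		client := &http.Client{Timeout: 2 * time.Second}
		// /health on 8080 (health plugin), /ready on 8181 (ready plugin).
		for _, url := range []string{
			fmt.Sprintf("http://%s:8080/health", podIP),
			fmt.Sprintf("http://%s:8181/ready", podIP),
		} {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println(url, "->", err)
				continue
			}
			fmt.Println(url, "->", resp.Status)
			resp.Body.Close()
		}
	}
	
	A 200 from /ready is what the kubelet readiness probe needs before the coredns pod counts as Ready; the lameduck message in the older instance is the health plugin draining before SIGTERM takes effect.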
	
	
	==> describe nodes <==
	Name:               pause-448284
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-448284
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e087d081f23c6d1317bb12845422265d8d3490cf
	                    minikube.k8s.io/name=pause-448284
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T12_14_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 12:14:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-448284
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 12:21:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 12:17:06 +0000   Mon, 29 Sep 2025 12:14:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 12:17:06 +0000   Mon, 29 Sep 2025 12:14:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 12:17:06 +0000   Mon, 29 Sep 2025 12:14:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 12:17:06 +0000   Mon, 29 Sep 2025 12:14:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.251
	  Hostname:    pause-448284
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	System Info:
	  Machine ID:                 42d25782885944fea62f753d25e39dbc
	  System UUID:                42d25782-8859-44fe-a62f-753d25e39dbc
	  Boot ID:                    05d5e510-fa0a-43f6-b526-551bd4d4aebb
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-gdw6h                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     6m42s
	  kube-system                 etcd-pause-448284                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         6m47s
	  kube-system                 kube-apiserver-pause-448284             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 kube-controller-manager-pause-448284    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 kube-proxy-whtqx                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 kube-scheduler-pause-448284             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m40s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m53s (x7 over 6m53s)  kubelet          Node pause-448284 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    6m53s (x8 over 6m53s)  kubelet          Node pause-448284 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  6m53s (x8 over 6m53s)  kubelet          Node pause-448284 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     6m47s                  kubelet          Node pause-448284 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m47s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m47s                  kubelet          Node pause-448284 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m47s                  kubelet          Node pause-448284 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                6m46s                  kubelet          Node pause-448284 status is now: NodeReady
	  Normal  RegisteredNode           6m43s                  node-controller  Node pause-448284 event: Registered Node pause-448284 in Controller
	  Normal  Starting                 4m8s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)    kubelet          Node pause-448284 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)    kubelet          Node pause-448284 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)    kubelet          Node pause-448284 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m                     node-controller  Node pause-448284 event: Registered Node pause-448284 in Controller
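	
	Everything in this describe output lives on the Node object, and the Conditions table is the part worth polling when a node wedges. A minimal client-go sketch that reads the same conditions for pause-448284, assuming a kubeconfig at ~/.kube/config (the path is an assumption; the test harness uses its own profile):
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"os"
		"path/filepath"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		home, _ := os.UserHomeDir()
		// Assumed kubeconfig path; point this at whichever profile is in use.
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "pause-448284", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		// Same rows as the Conditions table above.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}
	
	The Ready condition transitioning to True at 12:14:23 above is the c.Type == "Ready" entry here.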
	
	
	==> dmesg <==
	[Sep29 12:13] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000046] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.011470] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.163228] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep29 12:14] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.112761] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.130706] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.135511] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.025318] kauditd_printk_skb: 18 callbacks suppressed
	[ +11.314745] kauditd_printk_skb: 222 callbacks suppressed
	[Sep29 12:15] kauditd_printk_skb: 38 callbacks suppressed
	[Sep29 12:16] kauditd_printk_skb: 261 callbacks suppressed
	[Sep29 12:17] kauditd_printk_skb: 148 callbacks suppressed
	[  +5.496565] kauditd_printk_skb: 99 callbacks suppressed
	
	
	==> etcd [397cac24c5ac7061b13d625683b63d7d76c23e5890a9504f70a685ec8975298e] <==
	{"level":"warn","ts":"2025-09-29T12:17:04.711933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:17:04.746027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:17:04.757083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:17:04.781867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:17:04.808232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:17:04.829746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:17:04.851116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:17:04.867314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:17:04.888969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:17:04.903553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:17:04.926923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:17:04.937713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:17:04.960658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:17:04.999961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:17:05.027918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:17:05.042914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:17:05.055312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:17:05.121853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60238","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T12:17:06.961975Z","caller":"traceutil/trace.go:172","msg":"trace[1196346867] transaction","detail":"{read_only:false; response_revision:500; number_of_response:1; }","duration":"122.089596ms","start":"2025-09-29T12:17:06.839869Z","end":"2025-09-29T12:17:06.961959Z","steps":["trace[1196346867] 'process raft request'  (duration: 121.991597ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T12:18:48.197038Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.05789ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16204401801224095093 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-448284\" mod_revision:561 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-448284\" value_size:484 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-448284\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-29T12:18:48.197319Z","caller":"traceutil/trace.go:172","msg":"trace[663578881] transaction","detail":"{read_only:false; response_revision:567; number_of_response:1; }","duration":"227.055147ms","start":"2025-09-29T12:18:47.970236Z","end":"2025-09-29T12:18:48.197291Z","steps":["trace[663578881] 'process raft request'  (duration: 100.812996ms)","trace[663578881] 'compare'  (duration: 123.904332ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T12:19:12.661651Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"266.853103ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16204401801224095290 > lease_revoke:<id:60e1999567a985bb>","response":"size:27"}
	{"level":"info","ts":"2025-09-29T12:19:12.661774Z","caller":"traceutil/trace.go:172","msg":"trace[54449474] linearizableReadLoop","detail":"{readStateIndex:648; appliedIndex:647; }","duration":"263.063893ms","start":"2025-09-29T12:19:12.398691Z","end":"2025-09-29T12:19:12.661755Z","steps":["trace[54449474] 'read index received'  (duration: 66.414µs)","trace[54449474] 'applied index is now lower than readState.Index'  (duration: 262.996226ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T12:19:12.662021Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"263.286838ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-448284\" limit:1 ","response":"range_response_count:1 size:5280"}
	{"level":"info","ts":"2025-09-29T12:19:12.662054Z","caller":"traceutil/trace.go:172","msg":"trace[1353579687] range","detail":"{range_begin:/registry/minions/pause-448284; range_end:; response_count:1; response_revision:577; }","duration":"263.360883ms","start":"2025-09-29T12:19:12.398682Z","end":"2025-09-29T12:19:12.662042Z","steps":["trace[1353579687] 'agreement among raft nodes before linearized reading'  (duration: 263.177031ms)"],"step_count":1}
	
	
	==> etcd [e98a67edfdc65a81f26cbc0e77bfa203408717769b7fcfcad8430bdcfd69b548] <==
	{"level":"info","ts":"2025-09-29T12:15:24.401195Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-29T12:15:24.402110Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-29T12:15:24.447081Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"warn","ts":"2025-09-29T12:15:24.451857Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-09-29T12:15:24.452989Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-29T12:15:24.496166Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-29T12:15:24.704954Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.251:2379"}
	{"level":"info","ts":"2025-09-29T12:15:25.327207Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T12:15:25.327638Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-448284","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.251:2380"],"advertise-client-urls":["https://192.168.50.251:2379"]}
	{"level":"error","ts":"2025-09-29T12:15:25.327750Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T12:15:25.331450Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T12:15:25.331539Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:15:25.331569Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"439bb489ce44e0e1","current-leader-member-id":"439bb489ce44e0e1"}
	{"level":"info","ts":"2025-09-29T12:15:25.331655Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T12:15:25.331665Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T12:15:25.334620Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:15:25.336116Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:15:25.336022Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.251:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:15:25.336343Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.251:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T12:15:25.336363Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.251:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-09-29T12:15:25.338154Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:15:25.346322Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.251:2380"}
	{"level":"error","ts":"2025-09-29T12:15:25.346397Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.251:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:15:25.346424Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.251:2380"}
	{"level":"info","ts":"2025-09-29T12:15:25.346430Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-448284","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.251:2380"],"advertise-client-urls":["https://192.168.50.251:2379"]}
	
	
	==> kernel <==
	 12:21:09 up 7 min,  0 users,  load average: 0.14, 0.42, 0.24
	Linux pause-448284 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [7a5c86882efe9e1834cd208fe8a1e9eb458614093634096e76edb181c1df14a9] <==
	I0929 12:17:06.059091       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0929 12:17:06.059158       1 policy_source.go:240] refreshing policies
	I0929 12:17:06.076731       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 12:17:06.108044       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0929 12:17:06.118640       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0929 12:17:06.124540       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0929 12:17:06.127206       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E0929 12:17:06.130380       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0929 12:17:06.132662       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0929 12:17:06.147418       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0929 12:17:06.393610       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0929 12:17:06.970658       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0929 12:17:07.799643       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0929 12:17:07.856476       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 12:17:07.894506       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0929 12:17:07.903073       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0929 12:17:09.746894       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 12:17:09.855264       1 controller.go:667] quota admission added evaluator for: endpoints
	I0929 12:17:09.893567       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 12:18:04.208903       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:18:25.741263       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:19:20.374898       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:19:29.507909       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:20:39.544498       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:20:47.842140       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-apiserver [f5d57fcd84b137bc2cabf619630723bdacad6955827512a828ae2a79965a3466] <==
	W0929 12:15:25.549470       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0929 12:15:25.549570       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0929 12:15:25.553605       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I0929 12:15:25.562146       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0929 12:15:25.571116       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0929 12:15:25.571154       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0929 12:15:25.571448       1 instance.go:239] Using reconciler: lease
	W0929 12:15:25.573187       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0929 12:15:25.583558       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:15:26.548902       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:15:26.550236       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:15:26.585041       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:15:28.182955       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:15:28.323230       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:15:28.490945       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:15:30.434093       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:15:30.654294       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:15:31.120690       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:15:33.994186       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:15:34.907614       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:15:35.216037       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:15:39.525223       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:15:41.801657       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:15:42.467486       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0929 12:15:45.573264       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
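Each backoff line above is this apiserver redialing etcd on 127.0.0.1:2379 until it gives up at 12:15:45 with "context deadline exceeded", consistent with the etcd shutdown logged at 12:15:25 in the section above. For reproducing such a window by hand, one hedged probe: etcd's metrics listener on 127.0.0.1:2381 (the port visible in that shutdown trace) serves /health over plain HTTP, so no client certs are needed:

    out/minikube-linux-amd64 -p pause-448284 ssh "curl -s http://127.0.0.1:2381/health"
    # when etcd is up this returns roughly {"health":"true","reason":""};
    # connection refused here means the apiserver never had a chance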
	
	
	==> kube-controller-manager [0b9ac69f6bf6aa520160c113d51efdb95c05d442b7ed06d627b337b0fe5f1eca] <==
	I0929 12:15:25.323615       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-controller-manager [f648888e4077d1078096b2ecb31c2c4c48da18dc50dad463405bc4620991ba4a] <==
	I0929 12:17:09.390041       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 12:17:09.390834       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 12:17:09.395162       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 12:17:09.398614       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:17:09.401863       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0929 12:17:09.406246       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 12:17:09.410573       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 12:17:09.410615       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 12:17:09.417017       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0929 12:17:09.418239       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:17:09.418377       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0929 12:17:09.421679       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 12:17:09.422889       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 12:17:09.422928       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 12:17:09.424071       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0929 12:17:09.429383       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0929 12:17:09.431564       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0929 12:17:09.432727       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 12:17:09.436017       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 12:17:09.438512       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 12:17:09.440000       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 12:17:09.440191       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 12:17:09.449670       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 12:17:09.449687       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 12:17:09.449695       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [948b9271b3533c244b8b8078effff4d6dc750055e7998f71476db3ff7100e454] <==
	I0929 12:14:28.868298       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 12:14:28.969303       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:14:28.969345       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.251"]
	E0929 12:14:28.969466       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:14:29.237722       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0929 12:14:29.237777       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0929 12:14:29.237907       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:14:29.262383       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:14:29.264881       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:14:29.265078       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:14:29.302542       1 config.go:200] "Starting service config controller"
	I0929 12:14:29.334104       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:14:29.303884       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:14:29.337452       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:14:29.303898       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:14:29.337614       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:14:29.339584       1 config.go:309] "Starting node config controller"
	I0929 12:14:29.339674       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:14:29.339685       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:14:29.437397       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 12:14:29.437625       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 12:14:29.440085       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
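The IPv6 failure above is benign on this single-stack guest: the Buildroot kernel has no ip6tables nat table, so kube-proxy notes it and continues in IPv4-only mode, as the next line confirms. To verify it is the kernel and not kube-proxy, assuming the guest image ships an ip6tables binary as the error text suggests:

    out/minikube-linux-amd64 -p pause-448284 ssh "sudo ip6tables -t nat -L -n"
    # expected to fail with the same "can't initialize ip6tables table `nat'" message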
	
	
	==> kube-scheduler [1d3dd5527fe19a0d44bd21bdc6d14bd3dfc0d1b2c66db5c654ea78b2d7be4872] <==
	I0929 12:17:04.722270       1 serving.go:386] Generated self-signed cert in-memory
	W0929 12:17:05.954426       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 12:17:05.954462       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 12:17:05.954471       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 12:17:05.954476       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 12:17:06.037079       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 12:17:06.037848       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:17:06.044124       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:17:06.044163       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:17:06.044441       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 12:17:06.044456       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 12:17:06.144961       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
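The requestheader warning above prints its own remedy. Here is that template filled in for the identity the forbidden error actually names (system:kube-scheduler); the binding name is an arbitrary assumption, and during a control-plane restart like this one the warning is often transient and needs no RBAC change at all:

    kubectl --context pause-448284 -n kube-system create rolebinding \
      kube-scheduler-authn-reader \
      --role=extension-apiserver-authentication-reader \
      --user=system:kube-scheduler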
	
	
	==> kube-scheduler [e92ea40987ef263104fc84da9b5fab382f0ca424dc09d428b06e20eb93c2447c] <==
	I0929 12:16:58.089515       1 serving.go:386] Generated self-signed cert in-memory
	W0929 12:16:58.726068       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.50.251:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.50.251:8443: connect: connection refused
	W0929 12:16:58.726110       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 12:16:58.726120       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 12:16:58.733629       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 12:16:58.733678       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0929 12:16:58.733705       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0929 12:16:58.738408       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:16:58.738513       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:16:58.738902       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E0929 12:16:58.738993       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	E0929 12:16:58.739190       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:16:58.739223       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:16:58.739252       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 12:16:58.739281       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 12:16:58.739420       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 12:16:58.739462       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 12:16:58.739472       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 12:16:58.739486       1 run.go:72] "command failed" err="finished without leader elect"
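"finished without leader elect" means this earlier scheduler instance (started 12:16:58) was torn down before it ever acquired the leader lock; the instance in the previous block, started at 12:17:04, is the one that came up cleanly. A quick way to see which instance holds the lock at any moment, assuming the conventional kube-scheduler Lease in kube-system:

    kubectl --context pause-448284 -n kube-system get lease kube-scheduler \
      -o jsonpath='{.spec.holderIdentity}{"\n"}'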
	
	
	==> kubelet <==
	Sep 29 12:20:31 pause-448284 kubelet[4083]: E0929 12:20:31.663125    4083 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759148431662688050  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 29 12:20:31 pause-448284 kubelet[4083]: E0929 12:20:31.663144    4083 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759148431662688050  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 29 12:20:40 pause-448284 kubelet[4083]: E0929 12:20:40.435863    4083 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-whtqx_kube-system_418d6401-682d-449d-b126-511492131712_1\" already exists"
	Sep 29 12:20:40 pause-448284 kubelet[4083]: E0929 12:20:40.436086    4083 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-whtqx_kube-system_418d6401-682d-449d-b126-511492131712_1\" already exists" pod="kube-system/kube-proxy-whtqx"
	Sep 29 12:20:40 pause-448284 kubelet[4083]: E0929 12:20:40.436108    4083 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-whtqx_kube-system_418d6401-682d-449d-b126-511492131712_1\" already exists" pod="kube-system/kube-proxy-whtqx"
	Sep 29 12:20:40 pause-448284 kubelet[4083]: E0929 12:20:40.436516    4083 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-whtqx_kube-system(418d6401-682d-449d-b126-511492131712)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-whtqx_kube-system(418d6401-682d-449d-b126-511492131712)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-proxy-whtqx_kube-system_418d6401-682d-449d-b126-511492131712_1\\\" already exists\"" pod="kube-system/kube-proxy-whtqx" podUID="418d6401-682d-449d-b126-511492131712"
	Sep 29 12:20:41 pause-448284 kubelet[4083]: E0929 12:20:41.668322    4083 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759148441667274517  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 29 12:20:41 pause-448284 kubelet[4083]: E0929 12:20:41.668603    4083 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759148441667274517  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 29 12:20:51 pause-448284 kubelet[4083]: E0929 12:20:51.671764    4083 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759148451671535408  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 29 12:20:51 pause-448284 kubelet[4083]: E0929 12:20:51.672308    4083 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759148451671535408  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 29 12:20:55 pause-448284 kubelet[4083]: E0929 12:20:55.435229    4083 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-whtqx_kube-system_418d6401-682d-449d-b126-511492131712_1\" already exists"
	Sep 29 12:20:55 pause-448284 kubelet[4083]: E0929 12:20:55.435300    4083 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-whtqx_kube-system_418d6401-682d-449d-b126-511492131712_1\" already exists" pod="kube-system/kube-proxy-whtqx"
	Sep 29 12:20:55 pause-448284 kubelet[4083]: E0929 12:20:55.435315    4083 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-whtqx_kube-system_418d6401-682d-449d-b126-511492131712_1\" already exists" pod="kube-system/kube-proxy-whtqx"
	Sep 29 12:20:55 pause-448284 kubelet[4083]: E0929 12:20:55.435352    4083 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-whtqx_kube-system(418d6401-682d-449d-b126-511492131712)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-whtqx_kube-system(418d6401-682d-449d-b126-511492131712)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-proxy-whtqx_kube-system_418d6401-682d-449d-b126-511492131712_1\\\" already exists\"" pod="kube-system/kube-proxy-whtqx" podUID="418d6401-682d-449d-b126-511492131712"
	Sep 29 12:21:01 pause-448284 kubelet[4083]: E0929 12:21:01.600450    4083 manager.go:1116] Failed to create existing container: /kubepods/burstable/podf6f40860-3a4c-4115-b188-796234dcd556/crio-ecf865a6a2c06042ef713c25e8e304fa52cc9c82cb6e8531a9b937877d5ddb23: Error finding container ecf865a6a2c06042ef713c25e8e304fa52cc9c82cb6e8531a9b937877d5ddb23: Status 404 returned error can't find the container with id ecf865a6a2c06042ef713c25e8e304fa52cc9c82cb6e8531a9b937877d5ddb23
	Sep 29 12:21:01 pause-448284 kubelet[4083]: E0929 12:21:01.600720    4083 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod9fdb09b50f2f31fcab7fd51bd0d13713/crio-851671ab9c92004431b41b5ca1a28a8569bb28a1f6cdd4286bb9c5139e58c055: Error finding container 851671ab9c92004431b41b5ca1a28a8569bb28a1f6cdd4286bb9c5139e58c055: Status 404 returned error can't find the container with id 851671ab9c92004431b41b5ca1a28a8569bb28a1f6cdd4286bb9c5139e58c055
	Sep 29 12:21:01 pause-448284 kubelet[4083]: E0929 12:21:01.601198    4083 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod418d6401-682d-449d-b126-511492131712/crio-bd1208889672e5f8e257662b8024a6df8bf526064833e29b0ce8143a1c3d9c4c: Error finding container bd1208889672e5f8e257662b8024a6df8bf526064833e29b0ce8143a1c3d9c4c: Status 404 returned error can't find the container with id bd1208889672e5f8e257662b8024a6df8bf526064833e29b0ce8143a1c3d9c4c
	Sep 29 12:21:01 pause-448284 kubelet[4083]: E0929 12:21:01.601550    4083 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod0885f4db30b92999f03d70034a18a6f9/crio-f551896494f877f49534d4ac0a30bd1bcbe216a2adc24e5a3543e2b29f35dabe: Error finding container f551896494f877f49534d4ac0a30bd1bcbe216a2adc24e5a3543e2b29f35dabe: Status 404 returned error can't find the container with id f551896494f877f49534d4ac0a30bd1bcbe216a2adc24e5a3543e2b29f35dabe
	Sep 29 12:21:01 pause-448284 kubelet[4083]: E0929 12:21:01.601685    4083 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod3596d051327f8412b28855c576caf740/crio-94659dbfe6f3fee1b59a902b9fb496d33d1ae4fdf789d117c860d245c3508f59: Error finding container 94659dbfe6f3fee1b59a902b9fb496d33d1ae4fdf789d117c860d245c3508f59: Status 404 returned error can't find the container with id 94659dbfe6f3fee1b59a902b9fb496d33d1ae4fdf789d117c860d245c3508f59
	Sep 29 12:21:01 pause-448284 kubelet[4083]: E0929 12:21:01.675046    4083 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759148461674354202  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 29 12:21:01 pause-448284 kubelet[4083]: E0929 12:21:01.675112    4083 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759148461674354202  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 29 12:21:09 pause-448284 kubelet[4083]: E0929 12:21:09.437998    4083 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-whtqx_kube-system_418d6401-682d-449d-b126-511492131712_1\" already exists"
	Sep 29 12:21:09 pause-448284 kubelet[4083]: E0929 12:21:09.438038    4083 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-whtqx_kube-system_418d6401-682d-449d-b126-511492131712_1\" already exists" pod="kube-system/kube-proxy-whtqx"
	Sep 29 12:21:09 pause-448284 kubelet[4083]: E0929 12:21:09.438053    4083 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-whtqx_kube-system_418d6401-682d-449d-b126-511492131712_1\" already exists" pod="kube-system/kube-proxy-whtqx"
	Sep 29 12:21:09 pause-448284 kubelet[4083]: E0929 12:21:09.438095    4083 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-whtqx_kube-system(418d6401-682d-449d-b126-511492131712)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-whtqx_kube-system(418d6401-682d-449d-b126-511492131712)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-proxy-whtqx_kube-system_418d6401-682d-449d-b126-511492131712_1\\\" already exists\"" pod="kube-system/kube-proxy-whtqx" podUID="418d6401-682d-449d-b126-511492131712"
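The kubelet is looping here: CRI-O still holds a sandbox named k8s_kube-proxy-whtqx_kube-system_..._1, so every CreatePodSandbox retry collides with it. A minimal sketch for inspecting, and if necessary clearing, that stale sandbox with crictl (shipped in the minikube guest); the pod name is copied from the errors above and <pod-id> is a placeholder for whatever the listing prints:

    out/minikube-linux-amd64 -p pause-448284 ssh "sudo crictl pods --name kube-proxy-whtqx"
    # remove the stale sandbox by its POD ID so the kubelet can recreate it:
    # out/minikube-linux-amd64 -p pause-448284 ssh "sudo crictl rmp <pod-id>"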
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-448284 -n pause-448284
helpers_test.go:269: (dbg) Run:  kubectl --context pause-448284 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (361.92s)

                                                
                                    

Test pass (281/324)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 22.33
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.0/json-events 12.33
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.06
18 TestDownloadOnly/v1.34.0/DeleteAll 0.15
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.64
22 TestOffline 86.32
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 199.71
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 9.54
35 TestAddons/parallel/Registry 18.81
36 TestAddons/parallel/RegistryCreds 0.69
38 TestAddons/parallel/InspektorGadget 5.31
39 TestAddons/parallel/MetricsServer 6.04
41 TestAddons/parallel/CSI 63.86
42 TestAddons/parallel/Headlamp 22.3
43 TestAddons/parallel/CloudSpanner 6.63
44 TestAddons/parallel/LocalPath 59.62
45 TestAddons/parallel/NvidiaDevicePlugin 7.05
46 TestAddons/parallel/Yakd 12
48 TestAddons/StoppedEnableDisable 71.81
49 TestCertOptions 59.53
50 TestCertExpiration 295.42
52 TestForceSystemdFlag 53.96
53 TestForceSystemdEnv 46.04
55 TestKVMDriverInstallOrUpdate 0.71
59 TestErrorSpam/setup 38.98
60 TestErrorSpam/start 0.34
61 TestErrorSpam/status 0.78
62 TestErrorSpam/pause 1.64
63 TestErrorSpam/unpause 1.87
64 TestErrorSpam/stop 92.71
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 53.78
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 60.46
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.45
76 TestFunctional/serial/CacheCmd/cache/add_local 2.15
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.77
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 31.59
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.49
87 TestFunctional/serial/LogsFileCmd 1.46
88 TestFunctional/serial/InvalidService 4.74
90 TestFunctional/parallel/ConfigCmd 0.38
91 TestFunctional/parallel/DashboardCmd 12.96
92 TestFunctional/parallel/DryRun 0.32
93 TestFunctional/parallel/InternationalLanguage 0.15
94 TestFunctional/parallel/StatusCmd 1
98 TestFunctional/parallel/ServiceCmdConnect 22.49
99 TestFunctional/parallel/AddonsCmd 0.13
100 TestFunctional/parallel/PersistentVolumeClaim 39.26
102 TestFunctional/parallel/SSHCmd 0.39
103 TestFunctional/parallel/CpCmd 1.45
104 TestFunctional/parallel/MySQL 25.3
105 TestFunctional/parallel/FileSync 0.26
106 TestFunctional/parallel/CertSync 1.31
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
114 TestFunctional/parallel/License 0.34
115 TestFunctional/parallel/ServiceCmd/DeployApp 9.21
116 TestFunctional/parallel/Version/short 0.05
117 TestFunctional/parallel/Version/components 0.78
118 TestFunctional/parallel/ImageCommands/ImageListShort 1.26
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.35
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
122 TestFunctional/parallel/ImageCommands/ImageBuild 6.44
123 TestFunctional/parallel/ImageCommands/Setup 1.74
124 TestFunctional/parallel/MountCmd/any-port 17.47
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.34
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.95
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.61
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.6
129 TestFunctional/parallel/ServiceCmd/List 0.48
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.76
134 TestFunctional/parallel/ServiceCmd/Format 0.32
135 TestFunctional/parallel/ServiceCmd/URL 0.32
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.12
137 TestFunctional/parallel/ProfileCmd/profile_not_create 0.52
138 TestFunctional/parallel/ProfileCmd/profile_list 0.39
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
140 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
141 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
142 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
152 TestFunctional/parallel/MountCmd/specific-port 1.96
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.42
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 210.61
162 TestMultiControlPlane/serial/DeployApp 7.14
163 TestMultiControlPlane/serial/PingHostFromPods 1.23
164 TestMultiControlPlane/serial/AddWorkerNode 48.13
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
167 TestMultiControlPlane/serial/CopyFile 13.45
168 TestMultiControlPlane/serial/StopSecondaryNode 88.75
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.7
170 TestMultiControlPlane/serial/RestartSecondaryNode 33.31
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.98
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 374.8
173 TestMultiControlPlane/serial/DeleteSecondaryNode 18.42
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.68
175 TestMultiControlPlane/serial/StopCluster 254.02
176 TestMultiControlPlane/serial/RestartCluster 102.98
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
178 TestMultiControlPlane/serial/AddSecondaryNode 73.75
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.95
183 TestJSONOutput/start/Command 79.59
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.78
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.67
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.21
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.21
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 80.9
215 TestMountStart/serial/StartWithMountFirst 20.65
216 TestMountStart/serial/VerifyMountFirst 0.37
217 TestMountStart/serial/StartWithMountSecond 20.77
218 TestMountStart/serial/VerifyMountSecond 0.38
219 TestMountStart/serial/DeleteFirst 0.72
220 TestMountStart/serial/VerifyMountPostDelete 0.38
221 TestMountStart/serial/Stop 1.36
222 TestMountStart/serial/RestartStopped 19.73
223 TestMountStart/serial/VerifyMountPostStop 0.39
226 TestMultiNode/serial/FreshStart2Nodes 129.37
227 TestMultiNode/serial/DeployApp2Nodes 6
228 TestMultiNode/serial/PingHostFrom2Pods 0.78
229 TestMultiNode/serial/AddNode 45.75
230 TestMultiNode/serial/MultiNodeLabels 0.07
231 TestMultiNode/serial/ProfileList 0.6
232 TestMultiNode/serial/CopyFile 7.55
233 TestMultiNode/serial/StopNode 2.44
234 TestMultiNode/serial/StartAfterStop 38.16
235 TestMultiNode/serial/RestartKeepsNodes 339.72
236 TestMultiNode/serial/DeleteNode 2.68
237 TestMultiNode/serial/StopMultiNode 169.43
238 TestMultiNode/serial/RestartMultiNode 95.44
239 TestMultiNode/serial/ValidateNameConflict 37.77
246 TestScheduledStopUnix 108.95
250 TestRunningBinaryUpgrade 78.88
252 TestKubernetesUpgrade 173.58
254 TestStoppedBinaryUpgrade/Setup 2.62
266 TestPause/serial/Start 107.89
267 TestStoppedBinaryUpgrade/Upgrade 153.2
272 TestNetworkPlugins/group/false 3.28
277 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
278 TestNoKubernetes/serial/StartWithK8s 76.9
279 TestNoKubernetes/serial/StartWithStopK8s 33.5
281 TestNoKubernetes/serial/Start 39.18
282 TestStoppedBinaryUpgrade/MinikubeLogs 0.96
283 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
284 TestNoKubernetes/serial/ProfileList 9.14
285 TestNoKubernetes/serial/Stop 1.31
286 TestNoKubernetes/serial/StartNoArgs 41.43
287 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
289 TestStartStop/group/old-k8s-version/serial/FirstStart 102.16
291 TestStartStop/group/embed-certs/serial/FirstStart 86.92
292 TestStartStop/group/old-k8s-version/serial/DeployApp 11.32
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.14
294 TestStartStop/group/old-k8s-version/serial/Stop 74.93
295 TestStartStop/group/embed-certs/serial/DeployApp 11.28
296 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.01
297 TestStartStop/group/embed-certs/serial/Stop 80.85
299 TestStartStop/group/no-preload/serial/FirstStart 97.93
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
301 TestStartStop/group/old-k8s-version/serial/SecondStart 62.18
302 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
303 TestStartStop/group/embed-certs/serial/SecondStart 54.76
304 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 17.01
305 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
307 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85.29
308 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
309 TestStartStop/group/old-k8s-version/serial/Pause 3.39
310 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 9
312 TestStartStop/group/newest-cni/serial/FirstStart 62.08
313 TestStartStop/group/no-preload/serial/DeployApp 10.37
314 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
315 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
316 TestStartStop/group/embed-certs/serial/Pause 3.37
317 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.52
318 TestStartStop/group/no-preload/serial/Stop 86.54
319 TestNetworkPlugins/group/auto/Start 103.79
320 TestStartStop/group/newest-cni/serial/DeployApp 0
321 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.99
322 TestStartStop/group/newest-cni/serial/Stop 8.08
323 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
324 TestStartStop/group/newest-cni/serial/SecondStart 34.29
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.28
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.05
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 81.68
328 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
329 TestStartStop/group/no-preload/serial/SecondStart 60.68
330 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
331 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
333 TestStartStop/group/newest-cni/serial/Pause 2.64
334 TestNetworkPlugins/group/kindnet/Start 105.12
335 TestNetworkPlugins/group/auto/KubeletFlags 0.22
336 TestNetworkPlugins/group/auto/NetCatPod 10.23
337 TestNetworkPlugins/group/auto/DNS 0.22
338 TestNetworkPlugins/group/auto/Localhost 0.15
339 TestNetworkPlugins/group/auto/HairPin 0.16
340 TestNetworkPlugins/group/calico/Start 75.38
341 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.01
342 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.28
343 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 45.12
344 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
345 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.31
346 TestStartStop/group/no-preload/serial/Pause 3.41
347 TestNetworkPlugins/group/custom-flannel/Start 81.62
348 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
349 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 19.01
350 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
351 TestNetworkPlugins/group/kindnet/NetCatPod 10.3
352 TestNetworkPlugins/group/calico/ControllerPod 6.01
353 TestNetworkPlugins/group/calico/KubeletFlags 0.25
354 TestNetworkPlugins/group/calico/NetCatPod 10.27
355 TestNetworkPlugins/group/kindnet/DNS 0.19
356 TestNetworkPlugins/group/kindnet/Localhost 0.16
357 TestNetworkPlugins/group/kindnet/HairPin 0.16
358 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
359 TestNetworkPlugins/group/calico/DNS 0.2
360 TestNetworkPlugins/group/calico/Localhost 0.17
361 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.33
362 TestNetworkPlugins/group/calico/HairPin 0.2
363 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.34
364 TestNetworkPlugins/group/flannel/Start 79.51
365 TestNetworkPlugins/group/bridge/Start 100.86
366 TestNetworkPlugins/group/enable-default-cni/Start 113.11
367 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
368 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.26
369 TestNetworkPlugins/group/custom-flannel/DNS 0.18
370 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
371 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
372 TestNetworkPlugins/group/flannel/ControllerPod 6.01
373 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
374 TestNetworkPlugins/group/flannel/NetCatPod 10.23
375 TestNetworkPlugins/group/flannel/DNS 0.15
376 TestNetworkPlugins/group/flannel/Localhost 0.13
377 TestNetworkPlugins/group/flannel/HairPin 0.14
378 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
379 TestNetworkPlugins/group/bridge/NetCatPod 11.29
380 TestNetworkPlugins/group/bridge/DNS 0.15
381 TestNetworkPlugins/group/bridge/Localhost 0.13
382 TestNetworkPlugins/group/bridge/HairPin 0.12
383 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
384 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.25
385 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
386 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
387 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
TestDownloadOnly/v1.28.0/json-events (22.33s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-815607 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-815607 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (22.333932538s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (22.33s)
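
For reference: with -o=json, minikube emits one CloudEvents-style JSON object per line, which is what this test parses. A minimal sketch of eyeballing that stream by hand, assuming jq is installed and using a throwaway profile name:

    out/minikube-linux-amd64 start -o=json --download-only -p json-demo \
      --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2 \
      | jq -r '.type'   # one event type per emitted JSON line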

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0929 11:16:11.930259  369423 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0929 11:16:11.930410  369423 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21655-365455/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
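
For reference: the test only stats the cached tarball. The same check can be repeated by hand against the cache directory named in the log (a sketch; the path is the one from this run):

    ls -lh /home/jenkins/minikube-integration/21655-365455/.minikube/cache/preloaded-tarball/
    # expect preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 to be listed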

TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-815607
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-815607: exit status 85 (61.519608ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-815607 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-815607 │ jenkins │ v1.37.0 │ 29 Sep 25 11:15 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:15:49
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:15:49.638430  369435 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:15:49.638767  369435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:15:49.638790  369435 out.go:374] Setting ErrFile to fd 2...
	I0929 11:15:49.638797  369435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:15:49.639075  369435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-365455/.minikube/bin
	W0929 11:15:49.639252  369435 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21655-365455/.minikube/config/config.json: open /home/jenkins/minikube-integration/21655-365455/.minikube/config/config.json: no such file or directory
	I0929 11:15:49.639803  369435 out.go:368] Setting JSON to true
	I0929 11:15:49.640916  369435 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3492,"bootTime":1759141058,"procs":281,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:15:49.641044  369435 start.go:140] virtualization: kvm guest
	I0929 11:15:49.642954  369435 out.go:99] [download-only-815607] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W0929 11:15:49.643112  369435 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21655-365455/.minikube/cache/preloaded-tarball: no such file or directory
	I0929 11:15:49.643157  369435 notify.go:220] Checking for updates...
	I0929 11:15:49.644453  369435 out.go:171] MINIKUBE_LOCATION=21655
	I0929 11:15:49.645750  369435 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:15:49.646906  369435 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21655-365455/kubeconfig
	I0929 11:15:49.648121  369435 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-365455/.minikube
	I0929 11:15:49.649113  369435 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0929 11:15:49.650830  369435 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 11:15:49.651136  369435 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:15:49.681452  369435 out.go:99] Using the kvm2 driver based on user configuration
	I0929 11:15:49.681492  369435 start.go:304] selected driver: kvm2
	I0929 11:15:49.681502  369435 start.go:924] validating driver "kvm2" against <nil>
	I0929 11:15:49.681851  369435 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:15:49.681968  369435 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21655-365455/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:15:49.696170  369435 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:15:49.696205  369435 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21655-365455/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:15:49.709892  369435 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:15:49.709945  369435 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 11:15:49.710457  369435 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I0929 11:15:49.710618  369435 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 11:15:49.710653  369435 cni.go:84] Creating CNI manager for ""
	I0929 11:15:49.710727  369435 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 11:15:49.710737  369435 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 11:15:49.710789  369435 start.go:348] cluster config:
	{Name:download-only-815607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-815607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:15:49.710985  369435 iso.go:125] acquiring lock: {Name:mkf6a4bd1628698e7eb4c42d44aa8328e64686d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:15:49.712616  369435 out.go:99] Downloading VM boot image ...
	I0929 11:15:49.712652  369435 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21655-365455/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I0929 11:15:59.326570  369435 out.go:99] Starting "download-only-815607" primary control-plane node in "download-only-815607" cluster
	I0929 11:15:59.326701  369435 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0929 11:15:59.424333  369435 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0929 11:15:59.424377  369435 cache.go:58] Caching tarball of preloaded images
	I0929 11:15:59.424617  369435 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0929 11:15:59.426087  369435 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0929 11:15:59.426108  369435 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0929 11:15:59.526503  369435 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21655-365455/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-815607 host does not exist
	  To start a cluster, run: "minikube start -p download-only-815607"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
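
For reference: the non-zero exit is expected, since the download-only profile never started a cluster for "minikube logs" to inspect. The preload download shown above is checksum-gated; a sketch of repeating that verification manually, using the md5 from the URL in the log:

    cd /home/jenkins/minikube-integration/21655-365455/.minikube/cache/preloaded-tarball
    echo "72bc7f8573f574c02d8c9a9b3496176b  preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -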

TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-815607
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)
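
For reference: the two delete variants above differ only in scope. A sketch of the equivalent manual cleanup, including the cache purge that these tests deliberately skip:

    out/minikube-linux-amd64 delete -p download-only-815607   # a single profile
    out/minikube-linux-amd64 delete --all --purge             # all profiles plus the .minikube directory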

TestDownloadOnly/v1.34.0/json-events (12.33s)

=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-880021 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-880021 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (12.333298436s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (12.33s)

TestDownloadOnly/v1.34.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0929 11:16:24.605568  369423 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0929 11:16:24.605624  369423 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21655-365455/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-880021
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-880021: exit status 85 (62.182093ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-815607 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-815607 │ jenkins │ v1.37.0 │ 29 Sep 25 11:15 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 11:16 UTC │ 29 Sep 25 11:16 UTC │
	│ delete  │ -p download-only-815607                                                                                                                                                                             │ download-only-815607 │ jenkins │ v1.37.0 │ 29 Sep 25 11:16 UTC │ 29 Sep 25 11:16 UTC │
	│ start   │ -o=json --download-only -p download-only-880021 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-880021 │ jenkins │ v1.37.0 │ 29 Sep 25 11:16 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:16:12
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:16:12.314769  369674 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:16:12.315047  369674 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:16:12.315058  369674 out.go:374] Setting ErrFile to fd 2...
	I0929 11:16:12.315063  369674 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:16:12.315251  369674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-365455/.minikube/bin
	I0929 11:16:12.315759  369674 out.go:368] Setting JSON to true
	I0929 11:16:12.316743  369674 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3514,"bootTime":1759141058,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:16:12.316841  369674 start.go:140] virtualization: kvm guest
	I0929 11:16:12.318563  369674 out.go:99] [download-only-880021] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:16:12.318738  369674 notify.go:220] Checking for updates...
	I0929 11:16:12.319822  369674 out.go:171] MINIKUBE_LOCATION=21655
	I0929 11:16:12.321045  369674 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:16:12.322114  369674 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21655-365455/kubeconfig
	I0929 11:16:12.323147  369674 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-365455/.minikube
	I0929 11:16:12.324214  369674 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0929 11:16:12.325918  369674 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 11:16:12.326181  369674 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:16:12.356490  369674 out.go:99] Using the kvm2 driver based on user configuration
	I0929 11:16:12.356541  369674 start.go:304] selected driver: kvm2
	I0929 11:16:12.356553  369674 start.go:924] validating driver "kvm2" against <nil>
	I0929 11:16:12.356874  369674 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:16:12.356968  369674 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21655-365455/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:16:12.370774  369674 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:16:12.370811  369674 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21655-365455/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0929 11:16:12.384636  369674 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0929 11:16:12.384688  369674 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 11:16:12.385275  369674 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I0929 11:16:12.385431  369674 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 11:16:12.385463  369674 cni.go:84] Creating CNI manager for ""
	I0929 11:16:12.385507  369674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0929 11:16:12.385517  369674 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 11:16:12.385567  369674 start.go:348] cluster config:
	{Name:download-only-880021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-880021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:16:12.385649  369674 iso.go:125] acquiring lock: {Name:mkf6a4bd1628698e7eb4c42d44aa8328e64686d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:16:12.387150  369674 out.go:99] Starting "download-only-880021" primary control-plane node in "download-only-880021" cluster
	I0929 11:16:12.387167  369674 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:16:12.819769  369674 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 11:16:12.819799  369674 cache.go:58] Caching tarball of preloaded images
	I0929 11:16:12.819958  369674 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:16:12.821609  369674 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0929 11:16:12.821645  369674 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0929 11:16:13.292771  369674 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2ff28357f4fb6607eaee8f503f8804cd -> /home/jenkins/minikube-integration/21655-365455/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-880021 host does not exist
	  To start a cluster, run: "minikube start -p download-only-880021"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

TestDownloadOnly/v1.34.0/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.15s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-880021
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.64s)

=== RUN   TestBinaryMirror
I0929 11:16:25.216105  369423 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-390760 --alsologtostderr --binary-mirror http://127.0.0.1:46057 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-390760" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-390760
--- PASS: TestBinaryMirror (0.64s)
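
For reference: the test seeds a temporary local HTTP server and points --binary-mirror at it. A rough sketch of the same setup, assuming a ./mirror directory pre-populated to mimic the dl.k8s.io release path layout (the directory and profile name here are illustrative):

    python3 -m http.server 46057 --bind 127.0.0.1 --directory ./mirror &
    out/minikube-linux-amd64 start --download-only -p mirror-demo \
      --binary-mirror http://127.0.0.1:46057 --driver=kvm2 --container-runtime=crio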

TestOffline (86.32s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-130477 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-130477 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m25.357388616s)
helpers_test.go:175: Cleaning up "offline-crio-130477" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-130477
E0929 12:14:46.268764  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestOffline (86.32s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-965504
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-965504: exit status 85 (53.204552ms)

-- stdout --
	* Profile "addons-965504" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-965504"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
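
For reference: exit status 85 here signals a missing profile rather than an addon failure, exactly as the message says. A sketch of the guard it recommends, checking for the profile before toggling addons:

    out/minikube-linux-amd64 profile list   # confirm addons-965504 exists first
    out/minikube-linux-amd64 addons enable dashboard -p addons-965504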

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-965504
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-965504: exit status 85 (53.121425ms)

-- stdout --
	* Profile "addons-965504" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-965504"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (199.71s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-965504 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-965504 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m19.714226236s)
--- PASS: TestAddons/Setup (199.71s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-965504 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-965504 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/serial/GCPAuth/FakeCredentials (9.54s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-965504 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-965504 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c4801144-474a-40cc-9c33-ddafb69eddc6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c4801144-474a-40cc-9c33-ddafb69eddc6] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004877511s
addons_test.go:694: (dbg) Run:  kubectl --context addons-965504 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-965504 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-965504 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.54s)
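
For reference: the assertions above read the injected variables one exec at a time. A sketch of the same spot-check in a single exec, assuming the busybox pod from testdata/busybox.yaml is still running:

    kubectl --context addons-965504 exec busybox -- sh -c 'printenv | grep ^GOOGLE_'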

TestAddons/parallel/Registry (18.81s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 11.927382ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-l6ndb" [1cf4fa8a-dd96-410b-be59-11d26570dc2f] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003740104s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-86hvz" [e0c0f7c7-7f2e-460b-abcd-0429df2def6c] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004270523s
addons_test.go:392: (dbg) Run:  kubectl --context addons-965504 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-965504 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-965504 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.952878611s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-965504 ip
2025/09/29 11:20:22 [DEBUG] GET http://192.168.39.82:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965504 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.81s)
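
For reference: the wget --spider probe only proves the in-cluster Service answers. From the host, the registry reached via the node IP logged above speaks the standard Docker Registry v2 API; a sketch of listing its contents:

    curl -s http://192.168.39.82:5000/v2/_catalog   # e.g. {"repositories":[]}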

TestAddons/parallel/RegistryCreds (0.69s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.69627ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-965504
addons_test.go:332: (dbg) Run:  kubectl --context addons-965504 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965504 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.69s)

TestAddons/parallel/InspektorGadget (5.31s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-z5qw5" [df216355-05ab-43a4-8442-3cd9730f5c17] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004141145s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965504 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.31s)

TestAddons/parallel/MetricsServer (6.04s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 5.42882ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-kmt2w" [a2c18d6e-1539-4bb0-a8b6-f23dbb4ccb98] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009423893s
addons_test.go:463: (dbg) Run:  kubectl --context addons-965504 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965504 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.04s)
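
For reference: "kubectl top pods" succeeding is the real functional check here, since it only works once metrics-server is serving the metrics.k8s.io aggregated API. A sketch of the matching node-level checks:

    kubectl --context addons-965504 top nodes
    kubectl --context addons-965504 get apiservices v1beta1.metrics.k8s.io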

TestAddons/parallel/CSI (63.86s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0929 11:20:22.839648  369423 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0929 11:20:22.849521  369423 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0929 11:20:22.849563  369423 kapi.go:107] duration metric: took 9.939853ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 9.95459ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-965504 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-965504 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [0b216f5c-b0b0-4ca3-a0c0-b6b8ac5d759d] Pending
helpers_test.go:352: "task-pv-pod" [0b216f5c-b0b0-4ca3-a0c0-b6b8ac5d759d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [0b216f5c-b0b0-4ca3-a0c0-b6b8ac5d759d] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.004681799s
addons_test.go:572: (dbg) Run:  kubectl --context addons-965504 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-965504 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-965504 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-965504 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-965504 delete pod task-pv-pod: (1.079644946s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-965504 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-965504 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-965504 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [77a2ece1-acaf-4ddd-bf2e-77927b1739a3] Pending
helpers_test.go:352: "task-pv-pod-restore" [77a2ece1-acaf-4ddd-bf2e-77927b1739a3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [77a2ece1-acaf-4ddd-bf2e-77927b1739a3] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003804172s
addons_test.go:614: (dbg) Run:  kubectl --context addons-965504 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-965504 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-965504 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965504 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965504 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-965504 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.901955325s)
--- PASS: TestAddons/parallel/CSI (63.86s)
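
For reference: the long run of identical get-pvc invocations above is the test helper polling {.status.phase} until it reads Bound. A compact sketch of the same wait, either as an explicit loop or with kubectl's built-in jsonpath wait (kubectl 1.23+):

    until [ "$(kubectl --context addons-965504 get pvc hpvc -n default -o jsonpath='{.status.phase}')" = "Bound" ]; do
      sleep 2
    done
    # or, equivalently:
    kubectl --context addons-965504 wait pvc/hpvc -n default --for=jsonpath='{.status.phase}'=Bound --timeout=6m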

TestAddons/parallel/Headlamp (22.3s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-965504 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-965504 --alsologtostderr -v=1: (1.154104517s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-xk92s" [0a2e70d2-d34f-4ee1-9cf4-43ebe3459b0f] Pending
helpers_test.go:352: "headlamp-85f8f8dc54-xk92s" [0a2e70d2-d34f-4ee1-9cf4-43ebe3459b0f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-xk92s" [0a2e70d2-d34f-4ee1-9cf4-43ebe3459b0f] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.012469238s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965504 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-965504 addons disable headlamp --alsologtostderr -v=1: (6.135779545s)
--- PASS: TestAddons/parallel/Headlamp (22.30s)

TestAddons/parallel/CloudSpanner (6.63s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-jsxh7" [3d84bcb4-47c5-46e5-b569-b5e8eebc50bc] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003664578s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965504 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.63s)

TestAddons/parallel/LocalPath (59.62s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-965504 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-965504 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-965504 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [0bd43911-c0e6-4286-b42e-14cb59bd1f3e] Pending
helpers_test.go:352: "test-local-path" [0bd43911-c0e6-4286-b42e-14cb59bd1f3e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [0bd43911-c0e6-4286-b42e-14cb59bd1f3e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [0bd43911-c0e6-4286-b42e-14cb59bd1f3e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003795568s
addons_test.go:967: (dbg) Run:  kubectl --context addons-965504 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-965504 ssh "cat /opt/local-path-provisioner/pvc-4b3cf6fe-7015-406b-bfc8-70f12bca1c19_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-965504 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-965504 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965504 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-965504 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.303454949s)
--- PASS: TestAddons/parallel/LocalPath (59.62s)
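
For reference: the ssh/cat step works because the local-path provisioner backs each PV with a directory under /opt/local-path-provisioner on the node. A sketch of deriving that directory from the PVC instead of copying it out of the log (run before the test's own cleanup deletes the PVC):

    vol=$(kubectl --context addons-965504 get pvc test-pvc -n default -o jsonpath='{.spec.volumeName}')
    out/minikube-linux-amd64 -p addons-965504 ssh "ls /opt/local-path-provisioner/ | grep $vol"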

TestAddons/parallel/NvidiaDevicePlugin (7.05s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-4gm9t" [b4805764-22e3-451f-ad53-ca25d7965722] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.007001642s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965504 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-965504 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.043003866s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.05s)

                                                
                                    
TestAddons/parallel/Yakd (12s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-srvtn" [fd9d7f0d-92cd-4ea3-849f-15a231e61491] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006342284s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965504 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-965504 addons disable yakd --alsologtostderr -v=1: (5.994096646s)
--- PASS: TestAddons/parallel/Yakd (12.00s)

                                                
                                    
TestAddons/StoppedEnableDisable (71.81s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-965504
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-965504: (1m11.527860029s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-965504
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-965504
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-965504
--- PASS: TestAddons/StoppedEnableDisable (71.81s)

                                                
                                    
TestCertOptions (59.53s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-163071 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-163071 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (58.048998714s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-163071 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-163071 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-163071 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-163071" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-163071
--- PASS: TestCertOptions (59.53s)
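
The check above can be reproduced by hand: dump the apiserver certificate inside the VM and confirm the values passed via --apiserver-ips and --apiserver-names appear in it as SANs. A rough Go sketch under those assumptions, shelling out to the same minikube ssh command shown at cert_options_test.go:60 (a plain substring check stands in for real X.509 parsing):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the test runs to dump the apiserver certificate.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "cert-options-163071",
		"ssh", "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").CombinedOutput()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	// SAN values requested on the start line above.
	for _, want := range []string{"127.0.0.1", "192.168.15.15", "localhost", "www.google.com"} {
		if !strings.Contains(string(out), want) {
			fmt.Println("missing SAN:", want)
		}
	}
}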

                                                
                                    
TestCertExpiration (295.42s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-356327 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-356327 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (42.348619631s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-356327 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-356327 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m12.170225672s)
helpers_test.go:175: Cleaning up "cert-expiration-356327" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-356327
--- PASS: TestCertExpiration (295.42s)

                                                
                                    
TestForceSystemdFlag (53.96s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-785669 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-785669 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (52.735404173s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-785669 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-785669" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-785669
--- PASS: TestForceSystemdFlag (53.96s)

                                                
                                    
TestForceSystemdEnv (46.04s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-554195 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-554195 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (44.975956557s)
helpers_test.go:175: Cleaning up "force-systemd-env-554195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-554195
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-554195: (1.059011668s)
--- PASS: TestForceSystemdEnv (46.04s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0.71s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0929 12:17:17.348185  369423 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0929 12:17:17.348341  369423 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3872954562/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0929 12:17:17.377598  369423 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3872954562/001/docker-machine-driver-kvm2 version is 1.1.1
W0929 12:17:17.377655  369423 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W0929 12:17:17.377806  369423 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0929 12:17:17.377854  369423 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3872954562/001/docker-machine-driver-kvm2
I0929 12:17:17.919834  369423 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3872954562/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0929 12:17:17.936042  369423 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3872954562/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.71s)
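
The flow in this log is: probe the installed driver binary for its version, and download the release binary when it is older than the wanted one (1.1.1 vs. 1.37.0 above). A simplified Go sketch of that decision; invoking the driver with a version argument and the plain string comparison are assumptions (the log only shows the parsed result), and the sha256 verification visible in the download URL is omitted:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"os/exec"
	"strings"
)

const want = "1.37.0"

func main() {
	path := "docker-machine-driver-kvm2"                     // resolved via PATH in the real flow
	out, _ := exec.Command(path, "version").CombinedOutput() // assumed invocation
	if strings.Contains(string(out), want) {
		fmt.Println("driver already at", want)
		return
	}
	// Stale or missing: fetch the release binary (checksum check omitted here).
	url := "https://github.com/kubernetes/minikube/releases/download/v" + want +
		"/docker-machine-driver-kvm2-amd64"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if _, err := io.Copy(f, resp.Body); err != nil {
		panic(err)
	}
	fmt.Println("downloaded", url)
}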

                                                
                                    
TestErrorSpam/setup (38.98s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-211614 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-211614 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0929 11:24:46.275596  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:24:46.282063  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:24:46.293437  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:24:46.314827  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:24:46.356263  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:24:46.437713  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:24:46.599341  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:24:46.921051  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:24:47.563256  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:24:48.844645  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:24:51.407548  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-211614 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-211614 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (38.978279305s)
--- PASS: TestErrorSpam/setup (38.98s)

                                                
                                    
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-211614 --log_dir /tmp/nospam-211614 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-211614 --log_dir /tmp/nospam-211614 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-211614 --log_dir /tmp/nospam-211614 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.78s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-211614 --log_dir /tmp/nospam-211614 status
E0929 11:24:56.529796  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-211614 --log_dir /tmp/nospam-211614 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-211614 --log_dir /tmp/nospam-211614 status
--- PASS: TestErrorSpam/status (0.78s)

                                                
                                    
TestErrorSpam/pause (1.64s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-211614 --log_dir /tmp/nospam-211614 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-211614 --log_dir /tmp/nospam-211614 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-211614 --log_dir /tmp/nospam-211614 pause
--- PASS: TestErrorSpam/pause (1.64s)

                                                
                                    
TestErrorSpam/unpause (1.87s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-211614 --log_dir /tmp/nospam-211614 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-211614 --log_dir /tmp/nospam-211614 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-211614 --log_dir /tmp/nospam-211614 unpause
--- PASS: TestErrorSpam/unpause (1.87s)

                                                
                                    
TestErrorSpam/stop (92.71s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-211614 --log_dir /tmp/nospam-211614 stop
E0929 11:25:06.771822  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:25:27.254041  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:26:08.217169  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-211614 --log_dir /tmp/nospam-211614 stop: (1m28.695498406s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-211614 --log_dir /tmp/nospam-211614 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-211614 --log_dir /tmp/nospam-211614 stop: (2.05321029s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-211614 --log_dir /tmp/nospam-211614 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-211614 --log_dir /tmp/nospam-211614 stop: (1.960450745s)
--- PASS: TestErrorSpam/stop (92.71s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21655-365455/.minikube/files/etc/test/nested/copy/369423/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (53.78s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-668607 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-668607 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (53.77513129s)
--- PASS: TestFunctional/serial/StartWithProxy (53.78s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (60.46s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0929 11:27:27.726018  369423 config.go:182] Loaded profile config "functional-668607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-668607 --alsologtostderr -v=8
E0929 11:27:30.140052  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-668607 --alsologtostderr -v=8: (1m0.463163343s)
functional_test.go:678: soft start took 1m0.46403507s for "functional-668607" cluster.
I0929 11:28:28.189629  369423 config.go:182] Loaded profile config "functional-668607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (60.46s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-668607 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-668607 cache add registry.k8s.io/pause:3.1: (1.108784951s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-668607 cache add registry.k8s.io/pause:3.3: (1.202638616s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-668607 cache add registry.k8s.io/pause:latest: (1.140831231s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-668607 /tmp/TestFunctionalserialCacheCmdcacheadd_local3672985974/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 cache add minikube-local-cache-test:functional-668607
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-668607 cache add minikube-local-cache-test:functional-668607: (1.80384336s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 cache delete minikube-local-cache-test:functional-668607
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-668607
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.15s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.77s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668607 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (225.092412ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-668607 cache reload: (1.023051571s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.77s)
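
The four steps above are the whole contract of cache reload: remove the image on the node, confirm crictl inspecti now fails, run cache reload, confirm inspecti succeeds again. A small Go sketch chaining those exact commands and checking only exit status:

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	return exec.Command(args[0], args[1:]...).Run()
}

func main() {
	mk, img := "out/minikube-linux-amd64", "registry.k8s.io/pause:latest"
	_ = run(mk, "-p", "functional-668607", "ssh", "sudo crictl rmi "+img)
	if run(mk, "-p", "functional-668607", "ssh", "sudo crictl inspecti "+img) == nil {
		fmt.Println("expected inspecti to fail after rmi")
		return
	}
	if err := run(mk, "-p", "functional-668607", "cache", "reload"); err != nil {
		fmt.Println("cache reload failed:", err)
		return
	}
	if err := run(mk, "-p", "functional-668607", "ssh", "sudo crictl inspecti "+img); err != nil {
		fmt.Println("image still missing after reload:", err)
		return
	}
	fmt.Println("cache reload restored", img)
}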

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 kubectl -- --context functional-668607 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-668607 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (31.59s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-668607 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-668607 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.586604627s)
functional_test.go:776: restart took 31.586749536s for "functional-668607" cluster.
I0929 11:29:07.951601  369423 config.go:182] Loaded profile config "functional-668607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (31.59s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-668607 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
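
The phase/status pairs above come from a single query: list the tier=control-plane pods as JSON and read each pod's phase plus its Ready condition. A sketch of that extraction (encoding/json matches the lowercase Kubernetes field names case-insensitively; kubectl on PATH assumed):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct{ Name string }
		Status   struct {
			Phase      string
			Conditions []struct{ Type, Status string }
		}
	}
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-668607",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl podList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Items {
		fmt.Printf("%s phase: %s\n", p.Metadata.Name, p.Status.Phase)
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Printf("%s ready: %s\n", p.Metadata.Name, c.Status)
			}
		}
	}
}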

                                                
                                    
TestFunctional/serial/LogsCmd (1.49s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-668607 logs: (1.490538067s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 logs --file /tmp/TestFunctionalserialLogsFileCmd278628797/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-668607 logs --file /tmp/TestFunctionalserialLogsFileCmd278628797/001/logs.txt: (1.460842792s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

                                                
                                    
TestFunctional/serial/InvalidService (4.74s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-668607 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-668607
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-668607: exit status 115 (293.898701ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.200:32488 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-668607 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-668607 delete -f testdata/invalidsvc.yaml: (1.228727418s)
--- PASS: TestFunctional/serial/InvalidService (4.74s)
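
Exit status 115 with SVC_UNREACHABLE is the behavior this test locks in: a service whose pods never run has no reachable endpoint. A minimal Go sketch of the same assertion, assuming testdata/invalidsvc.yaml has already been applied as above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "service", "invalid-svc", "-p", "functional-668607")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 115 &&
		strings.Contains(string(out), "SVC_UNREACHABLE") {
		fmt.Println("got expected SVC_UNREACHABLE (exit status 115)")
		return
	}
	fmt.Println("unexpected result:", err)
}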

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668607 config get cpus: exit status 14 (72.256506ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668607 config get cpus: exit status 14 (54.748342ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (12.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-668607 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-668607 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 377496: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.96s)

                                                
                                    
TestFunctional/parallel/DryRun (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-668607 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-668607 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (165.38756ms)

                                                
                                                
-- stdout --
	* [functional-668607] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21655
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21655-365455/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-365455/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:29:16.307247  377126 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:29:16.307541  377126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:29:16.307556  377126 out.go:374] Setting ErrFile to fd 2...
	I0929 11:29:16.307563  377126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:29:16.307863  377126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-365455/.minikube/bin
	I0929 11:29:16.308423  377126 out.go:368] Setting JSON to false
	I0929 11:29:16.309747  377126 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4298,"bootTime":1759141058,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:29:16.309866  377126 start.go:140] virtualization: kvm guest
	I0929 11:29:16.311692  377126 out.go:179] * [functional-668607] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:29:16.315641  377126 notify.go:220] Checking for updates...
	I0929 11:29:16.315699  377126 out.go:179]   - MINIKUBE_LOCATION=21655
	I0929 11:29:16.317102  377126 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:29:16.318509  377126 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21655-365455/kubeconfig
	I0929 11:29:16.319935  377126 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-365455/.minikube
	I0929 11:29:16.321090  377126 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:29:16.322181  377126 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:29:16.323860  377126 config.go:182] Loaded profile config "functional-668607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:29:16.324564  377126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:29:16.324659  377126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:29:16.342362  377126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39065
	I0929 11:29:16.343067  377126 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:29:16.343825  377126 main.go:141] libmachine: Using API Version  1
	I0929 11:29:16.343846  377126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:29:16.344379  377126 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:29:16.344627  377126 main.go:141] libmachine: (functional-668607) Calling .DriverName
	I0929 11:29:16.344998  377126 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:29:16.345368  377126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:29:16.345421  377126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:29:16.363753  377126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42155
	I0929 11:29:16.364289  377126 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:29:16.364917  377126 main.go:141] libmachine: Using API Version  1
	I0929 11:29:16.364949  377126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:29:16.365451  377126 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:29:16.365669  377126 main.go:141] libmachine: (functional-668607) Calling .DriverName
	I0929 11:29:16.400922  377126 out.go:179] * Using the kvm2 driver based on existing profile
	I0929 11:29:16.402281  377126 start.go:304] selected driver: kvm2
	I0929 11:29:16.402307  377126 start.go:924] validating driver "kvm2" against &{Name:functional-668607 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-668607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:29:16.402500  377126 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:29:16.405162  377126 out.go:203] 
	W0929 11:29:16.406553  377126 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0929 11:29:16.407772  377126 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-668607 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.32s)
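
The dry run fails fast because minikube validates the requested memory against its 1800MB floor before touching the driver. A sketch asserting the documented exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY); the flag set is shortened relative to the full start line above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-668607",
		"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 23 {
		fmt.Println("got expected exit status 23 for insufficient memory")
		return
	}
	fmt.Println("unexpected result:", err)
}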

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-668607 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-668607 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (153.44408ms)

                                                
                                                
-- stdout --
	* [functional-668607] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21655
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21655-365455/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-365455/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:29:16.144711  377069 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:29:16.144800  377069 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:29:16.144804  377069 out.go:374] Setting ErrFile to fd 2...
	I0929 11:29:16.144809  377069 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:29:16.145140  377069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-365455/.minikube/bin
	I0929 11:29:16.145573  377069 out.go:368] Setting JSON to false
	I0929 11:29:16.146546  377069 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4298,"bootTime":1759141058,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:29:16.146615  377069 start.go:140] virtualization: kvm guest
	I0929 11:29:16.148248  377069 out.go:179] * [functional-668607] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0929 11:29:16.149537  377069 notify.go:220] Checking for updates...
	I0929 11:29:16.149561  377069 out.go:179]   - MINIKUBE_LOCATION=21655
	I0929 11:29:16.150698  377069 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:29:16.151731  377069 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21655-365455/kubeconfig
	I0929 11:29:16.152611  377069 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-365455/.minikube
	I0929 11:29:16.153657  377069 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:29:16.158244  377069 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:29:16.160253  377069 config.go:182] Loaded profile config "functional-668607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:29:16.160875  377069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:29:16.160994  377069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:29:16.180711  377069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37617
	I0929 11:29:16.181268  377069 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:29:16.181925  377069 main.go:141] libmachine: Using API Version  1
	I0929 11:29:16.182002  377069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:29:16.182459  377069 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:29:16.182696  377069 main.go:141] libmachine: (functional-668607) Calling .DriverName
	I0929 11:29:16.183006  377069 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:29:16.183326  377069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:29:16.183374  377069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:29:16.198376  377069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37627
	I0929 11:29:16.198873  377069 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:29:16.199430  377069 main.go:141] libmachine: Using API Version  1
	I0929 11:29:16.199458  377069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:29:16.199885  377069 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:29:16.200138  377069 main.go:141] libmachine: (functional-668607) Calling .DriverName
	I0929 11:29:16.236043  377069 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I0929 11:29:16.237206  377069 start.go:304] selected driver: kvm2
	I0929 11:29:16.237232  377069 start.go:924] validating driver "kvm2" against &{Name:functional-668607 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-668607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:29:16.237446  377069 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:29:16.240092  377069 out.go:203] 
	W0929 11:29:16.241513  377069 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0929 11:29:16.242673  377069 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (22.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-668607 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-668607 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-cv2d4" [b6c001ac-500f-4338-b166-835954211f2f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-cv2d4" [b6c001ac-500f-4338-b166-835954211f2f] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 22.007328214s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.200:32184
functional_test.go:1680: http://192.168.39.200:32184: success! body:
Request served by hello-node-connect-7d85dfc575-cv2d4

HTTP/1.1 GET /

Host: 192.168.39.200:32184
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (22.49s)
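The flow above is: create a deployment, expose it as a NodePort service, wait for the pod, discover the URL, and hit it. Replayed by hand it looks like this (a sketch built from the commands in the log; the NodePort, 32184 here, is assigned by Kubernetes, which is why `service --url` discovers it rather than hard-coding it):

kubectl --context functional-668607 create deployment hello-node-connect --image kicbase/echo-server
kubectl --context functional-668607 expose deployment hello-node-connect --type=NodePort --port=8080
kubectl --context functional-668607 wait --for=condition=ready pod -l app=hello-node-connect --timeout=120s
URL=$(out/minikube-linux-amd64 -p functional-668607 service hello-node-connect --url)
curl -s "$URL"   # echo-server replies with the request it served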

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (39.26s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [60d7cd40-b8d6-4859-ae52-0d09d2403026] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004020123s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-668607 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-668607 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-668607 get pvc myclaim -o=json
I0929 11:29:34.938666  369423 retry.go:31] will retry after 1.183505317s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:f0f86d72-e655-49ef-b693-78d5fae82eec ResourceVersion:800 Generation:0 CreationTimestamp:2025-09-29 11:29:34 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001d464c0 VolumeMode:0xc001d464d0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-668607 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-668607 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [7e314a63-10e5-45f9-bf1d-faf3c330cc03] Pending
helpers_test.go:352: "sp-pod" [7e314a63-10e5-45f9-bf1d-faf3c330cc03] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [7e314a63-10e5-45f9-bf1d-faf3c330cc03] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.003824202s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-668607 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-668607 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-668607 delete -f testdata/storage-provisioner/pod.yaml: (1.155094454s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-668607 apply -f testdata/storage-provisioner/pod.yaml
I0929 11:30:01.788275  369423 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [26c1bc0a-898d-4c3a-8339-86542ba3dbe0] Pending
helpers_test.go:352: "sp-pod" [26c1bc0a-898d-4c3a-8339-86542ba3dbe0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [26c1bc0a-898d-4c3a-8339-86542ba3dbe0] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004726711s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-668607 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (39.26s)
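The test's PVC spec can be read straight out of the last-applied-configuration annotation logged above (name myclaim, ReadWriteOnce, 500Mi, Filesystem volume mode). Reconstructed as a standalone manifest applied via a heredoc (a sketch equivalent to testdata/storage-provisioner/pvc.yaml as logged):

kubectl --context functional-668607 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  volumeMode: Filesystem
EOF

Persistence is then proven the way the log shows: touch /tmp/mount/foo in the first pod, delete and recreate the pod, and confirm the file is still there with `ls /tmp/mount`.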

TestFunctional/parallel/SSHCmd (0.39s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.39s)

TestFunctional/parallel/CpCmd (1.45s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh -n functional-668607 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 cp functional-668607:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2113071440/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh -n functional-668607 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh -n functional-668607 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.45s)

TestFunctional/parallel/MySQL (25.3s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-668607 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-qgn4t" [99afd68e-0b76-4862-b81f-332b92e7b470] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-qgn4t" [99afd68e-0b76-4862-b81f-332b92e7b470] Running
E0929 11:29:46.268902  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.24921664s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-668607 exec mysql-5bb876957f-qgn4t -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-668607 exec mysql-5bb876957f-qgn4t -- mysql -ppassword -e "show databases;": exit status 1 (264.400134ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0929 11:29:50.857680  369423 retry.go:31] will retry after 966.382091ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-668607 exec mysql-5bb876957f-qgn4t -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-668607 exec mysql-5bb876957f-qgn4t -- mysql -ppassword -e "show databases;": exit status 1 (145.975976ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0929 11:29:51.971438  369423 retry.go:31] will retry after 1.354213554s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-668607 exec mysql-5bb876957f-qgn4t -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.30s)
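The two ERROR 2002 failures above are the usual race between the pod reporting Ready and mysqld actually opening its Unix socket; the harness simply retries with backoff until the query succeeds. A hand-rolled equivalent of that retry (a sketch; the pod name and password are taken from the log and testdata/mysql.yaml):

# Poll until mysqld inside the pod accepts connections; readiness can precede it by a few seconds.
until kubectl --context functional-668607 exec mysql-5bb876957f-qgn4t -- mysql -ppassword -e "show databases;"; do
  sleep 1
done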

TestFunctional/parallel/FileSync (0.26s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/369423/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh "sudo cat /etc/test/nested/copy/369423/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

TestFunctional/parallel/CertSync (1.31s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/369423.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh "sudo cat /etc/ssl/certs/369423.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/369423.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh "sudo cat /usr/share/ca-certificates/369423.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3694232.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh "sudo cat /etc/ssl/certs/3694232.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3694232.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh "sudo cat /usr/share/ca-certificates/3694232.pem"
2025/09/29 11:29:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.31s)
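The hash-named files checked above (51391683.0, 3ec20f2e.0) follow the standard OpenSSL convention: /etc/ssl/certs/<subject_hash>.0 points at the certificate whose subject hashes to that value, which is how the synced test certs become trusted inside the VM. The hash for a given PEM can be computed directly:

# Prints the subject hash that determines the /etc/ssl/certs/<hash>.0 filename.
openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/369423.pem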

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-668607 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
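The go-template above iterates the first node's metadata.labels map and prints just the keys. A quicker interactive equivalent is the built-in flag (same information, less precise formatting):

kubectl --context functional-668607 get nodes --show-labels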

TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668607 ssh "sudo systemctl is-active docker": exit status 1 (220.034555ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668607 ssh "sudo systemctl is-active containerd": exit status 1 (222.242098ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
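The "Non-zero exit ... exit status 1" lines here are expected: `systemctl is-active` prints the unit state and exits non-zero for anything but active (status 3 for an inactive unit, which is what the inner "ssh: Process exited with status 3" reflects). Since this cluster runs crio, docker and containerd being inactive is the pass condition. Verified by hand:

# Inside the VM: prints "inactive" and exits 3 when the unit is not running.
out/minikube-linux-amd64 -p functional-668607 ssh 'sudo systemctl is-active docker; echo exit=$?'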

TestFunctional/parallel/License (0.34s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.34s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-668607 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-668607 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-zgnfp" [8370268d-8d66-42f7-abd7-14becf1601eb] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-zgnfp" [8370268d-8d66-42f7-abd7-14becf1601eb] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.006313118s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.21s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.78s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.78s)

TestFunctional/parallel/ImageCommands/ImageListShort (1.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-668607 image ls --format short --alsologtostderr: (1.258055218s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-668607 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-668607
localhost/kicbase/echo-server:functional-668607
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-668607 image ls --format short --alsologtostderr:
I0929 11:29:39.303631  379214 out.go:360] Setting OutFile to fd 1 ...
I0929 11:29:39.303758  379214 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:29:39.303769  379214 out.go:374] Setting ErrFile to fd 2...
I0929 11:29:39.303774  379214 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:29:39.303999  379214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-365455/.minikube/bin
I0929 11:29:39.304659  379214 config.go:182] Loaded profile config "functional-668607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:29:39.304752  379214 config.go:182] Loaded profile config "functional-668607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:29:39.305180  379214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 11:29:39.305245  379214 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 11:29:39.319509  379214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45685
I0929 11:29:39.320053  379214 main.go:141] libmachine: () Calling .GetVersion
I0929 11:29:39.320600  379214 main.go:141] libmachine: Using API Version  1
I0929 11:29:39.320625  379214 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 11:29:39.321049  379214 main.go:141] libmachine: () Calling .GetMachineName
I0929 11:29:39.321286  379214 main.go:141] libmachine: (functional-668607) Calling .GetState
I0929 11:29:39.323429  379214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 11:29:39.323483  379214 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 11:29:39.338799  379214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45925
I0929 11:29:39.339298  379214 main.go:141] libmachine: () Calling .GetVersion
I0929 11:29:39.339838  379214 main.go:141] libmachine: Using API Version  1
I0929 11:29:39.339869  379214 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 11:29:39.340298  379214 main.go:141] libmachine: () Calling .GetMachineName
I0929 11:29:39.340526  379214 main.go:141] libmachine: (functional-668607) Calling .DriverName
I0929 11:29:39.340759  379214 ssh_runner.go:195] Run: systemctl --version
I0929 11:29:39.340786  379214 main.go:141] libmachine: (functional-668607) Calling .GetSSHHostname
I0929 11:29:39.344652  379214 main.go:141] libmachine: (functional-668607) DBG | domain functional-668607 has defined MAC address 52:54:00:56:e9:b3 in network mk-functional-668607
I0929 11:29:39.345216  379214 main.go:141] libmachine: (functional-668607) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e9:b3", ip: ""} in network mk-functional-668607: {Iface:virbr1 ExpiryTime:2025-09-29 12:26:49 +0000 UTC Type:0 Mac:52:54:00:56:e9:b3 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:functional-668607 Clientid:01:52:54:00:56:e9:b3}
I0929 11:29:39.345263  379214 main.go:141] libmachine: (functional-668607) DBG | domain functional-668607 has defined IP address 192.168.39.200 and MAC address 52:54:00:56:e9:b3 in network mk-functional-668607
I0929 11:29:39.345420  379214 main.go:141] libmachine: (functional-668607) Calling .GetSSHPort
I0929 11:29:39.345661  379214 main.go:141] libmachine: (functional-668607) Calling .GetSSHKeyPath
I0929 11:29:39.345817  379214 main.go:141] libmachine: (functional-668607) Calling .GetSSHUsername
I0929 11:29:39.345992  379214 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/functional-668607/id_rsa Username:docker}
I0929 11:29:39.434965  379214 ssh_runner.go:195] Run: sudo crictl images --output json
I0929 11:29:40.508518  379214 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.073491037s)
I0929 11:29:40.509065  379214 main.go:141] libmachine: Making call to close driver server
I0929 11:29:40.509085  379214 main.go:141] libmachine: (functional-668607) Calling .Close
I0929 11:29:40.509407  379214 main.go:141] libmachine: Successfully made call to close driver server
I0929 11:29:40.509423  379214 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 11:29:40.509434  379214 main.go:141] libmachine: Making call to close driver server
I0929 11:29:40.509443  379214 main.go:141] libmachine: (functional-668607) Calling .Close
I0929 11:29:40.509925  379214 main.go:141] libmachine: Successfully made call to close driver server
I0929 11:29:40.509955  379214 main.go:141] libmachine: (functional-668607) DBG | Closing plugin on server side
I0929 11:29:40.509991  379214 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.26s)
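As the stderr trace shows, `image ls` is implemented by running `sudo crictl images --output json` inside the VM and flattening the repoTags. A rough host-side equivalent (a sketch; assumes jq is available on the host):

out/minikube-linux-amd64 -p functional-668607 ssh 'sudo crictl images --output json' | jq -r '.images[].repoTags[]' | sort -r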

TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-668607 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ localhost/my-image                      │ functional-668607  │ 81cf9031423a5 │ 1.47MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ 90550c43ad2bc │ 89.1MB │
│ localhost/minikube-local-cache-test     │ functional-668607  │ 78a8b462bbd8f │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ 46169d968e920 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ df0860106674d │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-668607  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ a0af72f2ec6d6 │ 76MB   │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-668607 image ls --format table --alsologtostderr:
I0929 11:29:47.519817  379394 out.go:360] Setting OutFile to fd 1 ...
I0929 11:29:47.520177  379394 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:29:47.520191  379394 out.go:374] Setting ErrFile to fd 2...
I0929 11:29:47.520199  379394 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:29:47.520477  379394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-365455/.minikube/bin
I0929 11:29:47.521239  379394 config.go:182] Loaded profile config "functional-668607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:29:47.521384  379394 config.go:182] Loaded profile config "functional-668607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:29:47.521832  379394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 11:29:47.521880  379394 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 11:29:47.537590  379394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36461
I0929 11:29:47.538346  379394 main.go:141] libmachine: () Calling .GetVersion
I0929 11:29:47.539027  379394 main.go:141] libmachine: Using API Version  1
I0929 11:29:47.539063  379394 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 11:29:47.539591  379394 main.go:141] libmachine: () Calling .GetMachineName
I0929 11:29:47.539879  379394 main.go:141] libmachine: (functional-668607) Calling .GetState
I0929 11:29:47.542268  379394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 11:29:47.542325  379394 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 11:29:47.564476  379394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46815
I0929 11:29:47.565117  379394 main.go:141] libmachine: () Calling .GetVersion
I0929 11:29:47.565807  379394 main.go:141] libmachine: Using API Version  1
I0929 11:29:47.565844  379394 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 11:29:47.566242  379394 main.go:141] libmachine: () Calling .GetMachineName
I0929 11:29:47.566458  379394 main.go:141] libmachine: (functional-668607) Calling .DriverName
I0929 11:29:47.566718  379394 ssh_runner.go:195] Run: systemctl --version
I0929 11:29:47.566764  379394 main.go:141] libmachine: (functional-668607) Calling .GetSSHHostname
I0929 11:29:47.570429  379394 main.go:141] libmachine: (functional-668607) DBG | domain functional-668607 has defined MAC address 52:54:00:56:e9:b3 in network mk-functional-668607
I0929 11:29:47.570960  379394 main.go:141] libmachine: (functional-668607) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e9:b3", ip: ""} in network mk-functional-668607: {Iface:virbr1 ExpiryTime:2025-09-29 12:26:49 +0000 UTC Type:0 Mac:52:54:00:56:e9:b3 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:functional-668607 Clientid:01:52:54:00:56:e9:b3}
I0929 11:29:47.571007  379394 main.go:141] libmachine: (functional-668607) DBG | domain functional-668607 has defined IP address 192.168.39.200 and MAC address 52:54:00:56:e9:b3 in network mk-functional-668607
I0929 11:29:47.571134  379394 main.go:141] libmachine: (functional-668607) Calling .GetSSHPort
I0929 11:29:47.571294  379394 main.go:141] libmachine: (functional-668607) Calling .GetSSHKeyPath
I0929 11:29:47.571448  379394 main.go:141] libmachine: (functional-668607) Calling .GetSSHUsername
I0929 11:29:47.571666  379394 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/functional-668607/id_rsa Username:docker}
I0929 11:29:47.694297  379394 ssh_runner.go:195] Run: sudo crictl images --output json
I0929 11:29:47.759117  379394 main.go:141] libmachine: Making call to close driver server
I0929 11:29:47.759134  379394 main.go:141] libmachine: (functional-668607) Calling .Close
I0929 11:29:47.759448  379394 main.go:141] libmachine: Successfully made call to close driver server
I0929 11:29:47.759467  379394 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 11:29:47.759476  379394 main.go:141] libmachine: Making call to close driver server
I0929 11:29:47.759484  379394 main.go:141] libmachine: (functional-668607) Calling .Close
I0929 11:29:47.759722  379394 main.go:141] libmachine: Successfully made call to close driver server
I0929 11:29:47.759737  379394 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 11:29:47.759769  379394 main.go:141] libmachine: (functional-668607) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-668607 image ls --format json --alsologtostderr:
[{"id":"23f9fb9a0d8e36a79c8f6687002e0dd3e09b224c35c8f22f77392b508ca4401a","repoDigests":["docker.io/library/747550b7c38e1090e162b69ae992353505212971193a15ea2aef17000bce029b-tmp@sha256:16f9824eeee3d22798abc09071b1c801f990983ebdae3d8793253fa3909ce302"],"repoTags":[],"size":"1466018"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac
6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"76004183"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"409467f978b4a30fe717012736557d63
7f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"78a8b462bbd8f79be89806477be74fb3b08e08f4f0fc83b2dc33a95632049dfb","repoDigests":["localhost/minikube-local-cache-test@sha256:ea0bf56b417d8f9ad3a441bd13352107dc5267f8448baa50645b33bfd765ed7b"],"repoTags":["localhost/minikube-local-cache-test:functional-668607"],"size":"3330"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"53844823"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf
60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea680
1edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-668607"],"size":"4944818"},{"id":"81cf9031423a52e7bb3f8fddee6a95482f4f61febdd38cb5b494d4dcca1097cd","repoDigests":["localhost/my-image@sha256:c7652178286f6579cdc901e56d61e12f5a395dca9a755a3059f96f4ab7e7b54a"],"repoTags":["localhost/my-image:functional-668607"],"size":"1468600"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a26110331
5f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.1
2.1"],"size":"76103547"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"73138071"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"89050097"},{"id":"07655ddf2eebe5d25
0f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-668607 image ls --format json --alsologtostderr:
I0929 11:29:47.271696  379370 out.go:360] Setting OutFile to fd 1 ...
I0929 11:29:47.271798  379370 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:29:47.271802  379370 out.go:374] Setting ErrFile to fd 2...
I0929 11:29:47.271806  379370 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:29:47.271998  379370 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-365455/.minikube/bin
I0929 11:29:47.272638  379370 config.go:182] Loaded profile config "functional-668607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:29:47.272732  379370 config.go:182] Loaded profile config "functional-668607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:29:47.273109  379370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 11:29:47.273165  379370 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 11:29:47.286842  379370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40183
I0929 11:29:47.287351  379370 main.go:141] libmachine: () Calling .GetVersion
I0929 11:29:47.287923  379370 main.go:141] libmachine: Using API Version  1
I0929 11:29:47.287954  379370 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 11:29:47.288332  379370 main.go:141] libmachine: () Calling .GetMachineName
I0929 11:29:47.288590  379370 main.go:141] libmachine: (functional-668607) Calling .GetState
I0929 11:29:47.290722  379370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 11:29:47.290770  379370 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 11:29:47.304774  379370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33513
I0929 11:29:47.305263  379370 main.go:141] libmachine: () Calling .GetVersion
I0929 11:29:47.305736  379370 main.go:141] libmachine: Using API Version  1
I0929 11:29:47.305760  379370 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 11:29:47.306271  379370 main.go:141] libmachine: () Calling .GetMachineName
I0929 11:29:47.306527  379370 main.go:141] libmachine: (functional-668607) Calling .DriverName
I0929 11:29:47.306755  379370 ssh_runner.go:195] Run: systemctl --version
I0929 11:29:47.306780  379370 main.go:141] libmachine: (functional-668607) Calling .GetSSHHostname
I0929 11:29:47.310155  379370 main.go:141] libmachine: (functional-668607) DBG | domain functional-668607 has defined MAC address 52:54:00:56:e9:b3 in network mk-functional-668607
I0929 11:29:47.310613  379370 main.go:141] libmachine: (functional-668607) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e9:b3", ip: ""} in network mk-functional-668607: {Iface:virbr1 ExpiryTime:2025-09-29 12:26:49 +0000 UTC Type:0 Mac:52:54:00:56:e9:b3 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:functional-668607 Clientid:01:52:54:00:56:e9:b3}
I0929 11:29:47.310656  379370 main.go:141] libmachine: (functional-668607) DBG | domain functional-668607 has defined IP address 192.168.39.200 and MAC address 52:54:00:56:e9:b3 in network mk-functional-668607
I0929 11:29:47.310870  379370 main.go:141] libmachine: (functional-668607) Calling .GetSSHPort
I0929 11:29:47.311080  379370 main.go:141] libmachine: (functional-668607) Calling .GetSSHKeyPath
I0929 11:29:47.311262  379370 main.go:141] libmachine: (functional-668607) Calling .GetSSHUsername
I0929 11:29:47.311389  379370 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/functional-668607/id_rsa Username:docker}
I0929 11:29:47.396191  379370 ssh_runner.go:195] Run: sudo crictl images --output json
I0929 11:29:47.455919  379370 main.go:141] libmachine: Making call to close driver server
I0929 11:29:47.455939  379370 main.go:141] libmachine: (functional-668607) Calling .Close
I0929 11:29:47.456245  379370 main.go:141] libmachine: Successfully made call to close driver server
I0929 11:29:47.456269  379370 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 11:29:47.456287  379370 main.go:141] libmachine: Making call to close driver server
I0929 11:29:47.456297  379370 main.go:141] libmachine: (functional-668607) Calling .Close
I0929 11:29:47.456544  379370 main.go:141] libmachine: Successfully made call to close driver server
I0929 11:29:47.456570  379370 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-668607 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-668607
size: "4944818"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "53844823"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 78a8b462bbd8f79be89806477be74fb3b08e08f4f0fc83b2dc33a95632049dfb
repoDigests:
- localhost/minikube-local-cache-test@sha256:ea0bf56b417d8f9ad3a441bd13352107dc5267f8448baa50645b33bfd765ed7b
repoTags:
- localhost/minikube-local-cache-test:functional-668607
size: "3330"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "76004183"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "73138071"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "89050097"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-668607 image ls --format yaml --alsologtostderr:
I0929 11:29:40.578389  379254 out.go:360] Setting OutFile to fd 1 ...
I0929 11:29:40.578660  379254 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:29:40.578669  379254 out.go:374] Setting ErrFile to fd 2...
I0929 11:29:40.578674  379254 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:29:40.578911  379254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-365455/.minikube/bin
I0929 11:29:40.579589  379254 config.go:182] Loaded profile config "functional-668607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:29:40.579699  379254 config.go:182] Loaded profile config "functional-668607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:29:40.580085  379254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 11:29:40.580135  379254 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 11:29:40.594278  379254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45819
I0929 11:29:40.594883  379254 main.go:141] libmachine: () Calling .GetVersion
I0929 11:29:40.595597  379254 main.go:141] libmachine: Using API Version  1
I0929 11:29:40.595627  379254 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 11:29:40.596056  379254 main.go:141] libmachine: () Calling .GetMachineName
I0929 11:29:40.596315  379254 main.go:141] libmachine: (functional-668607) Calling .GetState
I0929 11:29:40.598311  379254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 11:29:40.598368  379254 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 11:29:40.613101  379254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45053
I0929 11:29:40.613724  379254 main.go:141] libmachine: () Calling .GetVersion
I0929 11:29:40.614396  379254 main.go:141] libmachine: Using API Version  1
I0929 11:29:40.614425  379254 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 11:29:40.614894  379254 main.go:141] libmachine: () Calling .GetMachineName
I0929 11:29:40.615139  379254 main.go:141] libmachine: (functional-668607) Calling .DriverName
I0929 11:29:40.615420  379254 ssh_runner.go:195] Run: systemctl --version
I0929 11:29:40.615456  379254 main.go:141] libmachine: (functional-668607) Calling .GetSSHHostname
I0929 11:29:40.618943  379254 main.go:141] libmachine: (functional-668607) DBG | domain functional-668607 has defined MAC address 52:54:00:56:e9:b3 in network mk-functional-668607
I0929 11:29:40.619499  379254 main.go:141] libmachine: (functional-668607) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e9:b3", ip: ""} in network mk-functional-668607: {Iface:virbr1 ExpiryTime:2025-09-29 12:26:49 +0000 UTC Type:0 Mac:52:54:00:56:e9:b3 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:functional-668607 Clientid:01:52:54:00:56:e9:b3}
I0929 11:29:40.619549  379254 main.go:141] libmachine: (functional-668607) DBG | domain functional-668607 has defined IP address 192.168.39.200 and MAC address 52:54:00:56:e9:b3 in network mk-functional-668607
I0929 11:29:40.619763  379254 main.go:141] libmachine: (functional-668607) Calling .GetSSHPort
I0929 11:29:40.619951  379254 main.go:141] libmachine: (functional-668607) Calling .GetSSHKeyPath
I0929 11:29:40.620116  379254 main.go:141] libmachine: (functional-668607) Calling .GetSSHUsername
I0929 11:29:40.620294  379254 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/functional-668607/id_rsa Username:docker}
I0929 11:29:40.710155  379254 ssh_runner.go:195] Run: sudo crictl images --output json
I0929 11:29:40.775544  379254 main.go:141] libmachine: Making call to close driver server
I0929 11:29:40.775560  379254 main.go:141] libmachine: (functional-668607) Calling .Close
I0929 11:29:40.775963  379254 main.go:141] libmachine: Successfully made call to close driver server
I0929 11:29:40.776017  379254 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 11:29:40.776027  379254 main.go:141] libmachine: Making call to close driver server
I0929 11:29:40.776035  379254 main.go:141] libmachine: (functional-668607) Calling .Close
I0929 11:29:40.776035  379254 main.go:141] libmachine: (functional-668607) DBG | Closing plugin on server side
I0929 11:29:40.776322  379254 main.go:141] libmachine: Successfully made call to close driver server
I0929 11:29:40.776335  379254 main.go:141] libmachine: (functional-668607) DBG | Closing plugin on server side
I0929 11:29:40.776341  379254 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
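
The listing above can be reproduced outside the harness. A minimal sketch, assuming the functional-668607 profile from this run is still up; the second command mirrors the crictl call visible in the trace:

  out/minikube-linux-amd64 -p functional-668607 image ls --format yaml
  # minikube assembles the YAML by querying the CRI runtime inside the VM:
  out/minikube-linux-amd64 -p functional-668607 ssh "sudo crictl images --output json"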

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (6.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668607 ssh pgrep buildkitd: exit status 1 (219.808081ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 image build -t localhost/my-image:functional-668607 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-668607 image build -t localhost/my-image:functional-668607 testdata/build --alsologtostderr: (5.980224217s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-668607 image build -t localhost/my-image:functional-668607 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 23f9fb9a0d8
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-668607
--> 81cf9031423
Successfully tagged localhost/my-image:functional-668607
81cf9031423a52e7bb3f8fddee6a95482f4f61febdd38cb5b494d4dcca1097cd
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-668607 image build -t localhost/my-image:functional-668607 testdata/build --alsologtostderr:
I0929 11:29:41.054671  379306 out.go:360] Setting OutFile to fd 1 ...
I0929 11:29:41.054964  379306 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:29:41.054993  379306 out.go:374] Setting ErrFile to fd 2...
I0929 11:29:41.055000  379306 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:29:41.055247  379306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-365455/.minikube/bin
I0929 11:29:41.055892  379306 config.go:182] Loaded profile config "functional-668607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:29:41.056698  379306 config.go:182] Loaded profile config "functional-668607": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:29:41.057124  379306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 11:29:41.057181  379306 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 11:29:41.071413  379306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42599
I0929 11:29:41.072024  379306 main.go:141] libmachine: () Calling .GetVersion
I0929 11:29:41.072606  379306 main.go:141] libmachine: Using API Version  1
I0929 11:29:41.072629  379306 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 11:29:41.073022  379306 main.go:141] libmachine: () Calling .GetMachineName
I0929 11:29:41.073221  379306 main.go:141] libmachine: (functional-668607) Calling .GetState
I0929 11:29:41.075387  379306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0929 11:29:41.075430  379306 main.go:141] libmachine: Launching plugin server for driver kvm2
I0929 11:29:41.089776  379306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46787
I0929 11:29:41.090292  379306 main.go:141] libmachine: () Calling .GetVersion
I0929 11:29:41.090748  379306 main.go:141] libmachine: Using API Version  1
I0929 11:29:41.090773  379306 main.go:141] libmachine: () Calling .SetConfigRaw
I0929 11:29:41.091146  379306 main.go:141] libmachine: () Calling .GetMachineName
I0929 11:29:41.091400  379306 main.go:141] libmachine: (functional-668607) Calling .DriverName
I0929 11:29:41.091686  379306 ssh_runner.go:195] Run: systemctl --version
I0929 11:29:41.091724  379306 main.go:141] libmachine: (functional-668607) Calling .GetSSHHostname
I0929 11:29:41.095146  379306 main.go:141] libmachine: (functional-668607) DBG | domain functional-668607 has defined MAC address 52:54:00:56:e9:b3 in network mk-functional-668607
I0929 11:29:41.095627  379306 main.go:141] libmachine: (functional-668607) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:e9:b3", ip: ""} in network mk-functional-668607: {Iface:virbr1 ExpiryTime:2025-09-29 12:26:49 +0000 UTC Type:0 Mac:52:54:00:56:e9:b3 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:functional-668607 Clientid:01:52:54:00:56:e9:b3}
I0929 11:29:41.095667  379306 main.go:141] libmachine: (functional-668607) DBG | domain functional-668607 has defined IP address 192.168.39.200 and MAC address 52:54:00:56:e9:b3 in network mk-functional-668607
I0929 11:29:41.095872  379306 main.go:141] libmachine: (functional-668607) Calling .GetSSHPort
I0929 11:29:41.096087  379306 main.go:141] libmachine: (functional-668607) Calling .GetSSHKeyPath
I0929 11:29:41.096266  379306 main.go:141] libmachine: (functional-668607) Calling .GetSSHUsername
I0929 11:29:41.096429  379306 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/functional-668607/id_rsa Username:docker}
I0929 11:29:41.221385  379306 build_images.go:161] Building image from path: /tmp/build.4241893010.tar
I0929 11:29:41.221460  379306 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0929 11:29:41.261886  379306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4241893010.tar
I0929 11:29:41.274478  379306 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4241893010.tar: stat -c "%s %y" /var/lib/minikube/build/build.4241893010.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4241893010.tar': No such file or directory
I0929 11:29:41.274521  379306 ssh_runner.go:362] scp /tmp/build.4241893010.tar --> /var/lib/minikube/build/build.4241893010.tar (3072 bytes)
I0929 11:29:41.336329  379306 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4241893010
I0929 11:29:41.356481  379306 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4241893010 -xf /var/lib/minikube/build/build.4241893010.tar
I0929 11:29:41.377323  379306 crio.go:315] Building image: /var/lib/minikube/build/build.4241893010
I0929 11:29:41.377397  379306 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-668607 /var/lib/minikube/build/build.4241893010 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0929 11:29:46.953326  379306 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-668607 /var/lib/minikube/build/build.4241893010 --cgroup-manager=cgroupfs: (5.575900057s)
I0929 11:29:46.953419  379306 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4241893010
I0929 11:29:46.967913  379306 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4241893010.tar
I0929 11:29:46.979580  379306 build_images.go:217] Built localhost/my-image:functional-668607 from /tmp/build.4241893010.tar
I0929 11:29:46.979630  379306 build_images.go:133] succeeded building to: functional-668607
I0929 11:29:46.979635  379306 build_images.go:134] failed building to: 
I0929 11:29:46.979670  379306 main.go:141] libmachine: Making call to close driver server
I0929 11:29:46.979683  379306 main.go:141] libmachine: (functional-668607) Calling .Close
I0929 11:29:46.980153  379306 main.go:141] libmachine: Successfully made call to close driver server
I0929 11:29:46.980179  379306 main.go:141] libmachine: (functional-668607) DBG | Closing plugin on server side
I0929 11:29:46.980197  379306 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 11:29:46.980206  379306 main.go:141] libmachine: Making call to close driver server
I0929 11:29:46.980212  379306 main.go:141] libmachine: (functional-668607) Calling .Close
I0929 11:29:46.980538  379306 main.go:141] libmachine: Successfully made call to close driver server
I0929 11:29:46.980553  379306 main.go:141] libmachine: Making call to close connection to plugin binary
I0929 11:29:46.980574  379306 main.go:141] libmachine: (functional-668607) DBG | Closing plugin on server side
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.44s)
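
The stderr trace above lays out the whole build path: the local context is tarred, copied into the VM, extracted under /var/lib/minikube/build, and handed to podman. A condensed sketch of the same flow; the build.<N> directory name is generated per run, so it is a placeholder here:

  out/minikube-linux-amd64 -p functional-668607 image build -t localhost/my-image:functional-668607 testdata/build
  # roughly equivalent steps inside the VM, per the trace:
  #   sudo mkdir -p /var/lib/minikube/build/build.<N>
  #   sudo tar -C /var/lib/minikube/build/build.<N> -xf /var/lib/minikube/build/build.<N>.tar
  #   sudo podman build -t localhost/my-image:functional-668607 /var/lib/minikube/build/build.<N> --cgroup-manager=cgroupfs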

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.721339411s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-668607
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.74s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (17.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-668607 /tmp/TestFunctionalparallelMountCmdany-port566440460/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759145357569549103" to /tmp/TestFunctionalparallelMountCmdany-port566440460/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759145357569549103" to /tmp/TestFunctionalparallelMountCmdany-port566440460/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759145357569549103" to /tmp/TestFunctionalparallelMountCmdany-port566440460/001/test-1759145357569549103
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668607 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (248.168249ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 11:29:17.818119  369423 retry.go:31] will retry after 265.000825ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 29 11:29 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 29 11:29 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 29 11:29 test-1759145357569549103
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh cat /mount-9p/test-1759145357569549103
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-668607 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [ad6cb639-1a88-4eb0-a24b-5fe78079006e] Pending
helpers_test.go:352: "busybox-mount" [ad6cb639-1a88-4eb0-a24b-5fe78079006e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [ad6cb639-1a88-4eb0-a24b-5fe78079006e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [ad6cb639-1a88-4eb0-a24b-5fe78079006e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 15.008258399s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-668607 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-668607 /tmp/TestFunctionalparallelMountCmdany-port566440460/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (17.47s)
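
The 9p mount lifecycle exercised here can be driven by hand. A minimal sketch, with /tmp/host-dir standing in for the per-test temp directory and the mount daemon backgrounded:

  out/minikube-linux-amd64 mount -p functional-668607 /tmp/host-dir:/mount-9p &
  out/minikube-linux-amd64 -p functional-668607 ssh "findmnt -T /mount-9p | grep 9p"   # verify the 9p mount
  out/minikube-linux-amd64 -p functional-668607 ssh -- ls -la /mount-9p
  out/minikube-linux-amd64 -p functional-668607 ssh "sudo umount -f /mount-9p"         # tear down

The first findmnt in the log failed with exit status 1 and was retried after 265ms; the mount daemon needs a moment before the guest mount appears.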

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 image load --daemon kicbase/echo-server:functional-668607 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-668607 image load --daemon kicbase/echo-server:functional-668607 --alsologtostderr: (1.125307096s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 image load --daemon kicbase/echo-server:functional-668607 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.95s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-668607
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 image load --daemon kicbase/echo-server:functional-668607 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-668607 image load --daemon kicbase/echo-server:functional-668607 --alsologtostderr: (2.529253314s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.61s)
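
The three daemon-load tests above share one pattern: tag an image into the host docker daemon, push it into the cluster's crio runtime, and list to confirm. A sketch using the commands from this run:

  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-668607
  out/minikube-linux-amd64 -p functional-668607 image load --daemon kicbase/echo-server:functional-668607
  out/minikube-linux-amd64 -p functional-668607 image ls   # the functional-668607 tag should now be listed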

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 image save kicbase/echo-server:functional-668607 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 service list -o json
functional_test.go:1504: Took "459.650137ms" to run "out/minikube-linux-amd64 -p functional-668607 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)
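
A sketch of consuming the JSON listing; jq is not part of the test and is assumed to be installed on the host:

  out/minikube-linux-amd64 -p functional-668607 service list -o json | jq .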

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 image rm kicbase/echo-server:functional-668607 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.200:30577
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)
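
A sketch of capturing the discovered endpoint for scripting; the URL shown is specific to this run:

  URL=$(out/minikube-linux-amd64 -p functional-668607 service --namespace=default --https --url hello-node)
  echo "$URL"   # e.g. https://192.168.39.200:30577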

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)
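
Together with ImageSaveToFile above, this closes a tar round trip between the host and the cluster runtime. A sketch, with /tmp used as an illustrative archive location:

  out/minikube-linux-amd64 -p functional-668607 image save kicbase/echo-server:functional-668607 /tmp/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-668607 image load /tmp/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-668607 image ls   # confirm the tag is back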

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.200:30577
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-668607
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 image save --daemon kicbase/echo-server:functional-668607 --alsologtostderr
functional_test.go:439: (dbg) Done: out/minikube-linux-amd64 -p functional-668607 image save --daemon kicbase/echo-server:functional-668607 --alsologtostderr: (1.075155319s)
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-668607
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.12s)
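
Worth noting from the final inspect: when an image is saved from the crio runtime back into the local docker daemon, the tag comes back under a localhost/ prefix. The round trip from this run:

  docker rmi kicbase/echo-server:functional-668607
  out/minikube-linux-amd64 -p functional-668607 image save --daemon kicbase/echo-server:functional-668607
  docker image inspect localhost/kicbase/echo-server:functional-668607   # note the localhost/ prefix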

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "333.260773ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "57.572469ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "379.549853ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "59.958369ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)
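
A sketch of the profile-listing variants timed above; --light skips validating live cluster status, which is why it returns in about 60ms versus roughly 380ms for the full listing:

  out/minikube-linux-amd64 profile list
  out/minikube-linux-amd64 profile list -o json
  out/minikube-linux-amd64 profile list -o json --light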

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
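
All three variants run the same command: update-context rewrites the profile's kubeconfig entry to match the cluster's current IP and port. A sketch of checking the result; the kubectl step is illustrative and assumes functional-668607 is the current context:

  out/minikube-linux-amd64 -p functional-668607 update-context
  kubectl config view --minify   # inspect the rewritten cluster entry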

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-668607 /tmp/TestFunctionalparallelMountCmdspecific-port3477611315/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668607 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (301.138664ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 11:29:35.342027  369423 retry.go:31] will retry after 574.256955ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-668607 /tmp/TestFunctionalparallelMountCmdspecific-port3477611315/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
I0929 11:29:36.366965  369423 detect.go:223] nested VM detected
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668607 ssh "sudo umount -f /mount-9p": exit status 1 (220.899786ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-668607 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-668607 /tmp/TestFunctionalparallelMountCmdspecific-port3477611315/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.96s)
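
Same flow as any-port, but pinned to a fixed host port. The umount exit status 32 above corresponds to the "not mounted" message in stdout: cleanup ran after the mount had already been torn down, so the failure is expected. A sketch, with /tmp/host-dir again as a stand-in:

  out/minikube-linux-amd64 mount -p functional-668607 /tmp/host-dir:/mount-9p --port 46464 &
  out/minikube-linux-amd64 -p functional-668607 ssh "findmnt -T /mount-9p | grep 9p"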

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-668607 /tmp/TestFunctionalparallelMountCmdVerifyCleanup131737224/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-668607 /tmp/TestFunctionalparallelMountCmdVerifyCleanup131737224/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-668607 /tmp/TestFunctionalparallelMountCmdVerifyCleanup131737224/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668607 ssh "findmnt -T" /mount1: exit status 1 (260.055204ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 11:29:37.259960  369423 retry.go:31] will retry after 422.058936ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-668607 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-668607 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-668607 /tmp/TestFunctionalparallelMountCmdVerifyCleanup131737224/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-668607 /tmp/TestFunctionalparallelMountCmdVerifyCleanup131737224/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-668607 /tmp/TestFunctionalparallelMountCmdVerifyCleanup131737224/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.42s)
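
mount --kill=true tears down every mount daemon for the profile at once, which is why the three individual stop attempts afterwards find no parent process. A sketch, with /tmp/host-dir as an illustrative source directory:

  out/minikube-linux-amd64 mount -p functional-668607 /tmp/host-dir:/mount1 &
  out/minikube-linux-amd64 mount -p functional-668607 /tmp/host-dir:/mount2 &
  out/minikube-linux-amd64 mount -p functional-668607 /tmp/host-dir:/mount3 &
  out/minikube-linux-amd64 mount -p functional-668607 --kill=true   # cleans up all three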

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-668607
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-668607
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-668607
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (210.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0929 11:30:13.982284  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-031500 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (3m29.888491112s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (210.61s)
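
A sketch of bringing up the same three-control-plane topology outside the harness, with the flags copied from the invocation above (--auto-update-drivers=false only disables driver auto-updates and is omitted here):

  out/minikube-linux-amd64 -p ha-031500 start --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 -p ha-031500 status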

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-031500 kubectl -- rollout status deployment/busybox: (4.93341184s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 kubectl -- exec busybox-7b57f96db7-2hjwm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 kubectl -- exec busybox-7b57f96db7-w9d6x -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 kubectl -- exec busybox-7b57f96db7-wgctx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 kubectl -- exec busybox-7b57f96db7-2hjwm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 kubectl -- exec busybox-7b57f96db7-w9d6x -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 kubectl -- exec busybox-7b57f96db7-wgctx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 kubectl -- exec busybox-7b57f96db7-2hjwm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 kubectl -- exec busybox-7b57f96db7-w9d6x -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 kubectl -- exec busybox-7b57f96db7-wgctx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.14s)
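
The deployment is applied through minikube's kubectl passthrough and then polled until rolled out. A condensed sketch of the steps above:

  out/minikube-linux-amd64 -p ha-031500 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
  out/minikube-linux-amd64 -p ha-031500 kubectl -- rollout status deployment/busybox
  out/minikube-linux-amd64 -p ha-031500 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'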

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 kubectl -- exec busybox-7b57f96db7-2hjwm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 kubectl -- exec busybox-7b57f96db7-2hjwm -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 kubectl -- exec busybox-7b57f96db7-w9d6x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 kubectl -- exec busybox-7b57f96db7-w9d6x -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 kubectl -- exec busybox-7b57f96db7-wgctx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 kubectl -- exec busybox-7b57f96db7-wgctx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.23s)
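
The awk/cut pipeline pulls the resolved address out of nslookup's fixed-format output (line 5, third field) so the pod can then ping the host gateway. A sketch against one of the pods from this run:

  out/minikube-linux-amd64 -p ha-031500 kubectl -- exec busybox-7b57f96db7-2hjwm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  out/minikube-linux-amd64 -p ha-031500 kubectl -- exec busybox-7b57f96db7-2hjwm -- sh -c "ping -c 1 192.168.39.1"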

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (48.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 node add --alsologtostderr -v 5
E0929 11:34:15.918434  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:15.924913  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:15.936314  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:15.957799  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:15.999142  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:16.080692  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:16.242520  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:16.564384  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:17.206412  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:18.487945  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:21.049482  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:26.171251  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-031500 node add --alsologtostderr -v 5: (47.197798144s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 status --alsologtostderr -v 5
E0929 11:34:36.412956  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (48.13s)
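
node add joins a worker by default (a control-plane join would use the --control-plane flag, not exercised in this run). The cert_rotation errors interleaved above appear to be client-go noise from the earlier functional-668607 profile's deleted client cert and do not affect the join. A sketch:

  out/minikube-linux-amd64 -p ha-031500 node add
  out/minikube-linux-amd64 -p ha-031500 status   # the new node should show as type: Worker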

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-031500 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (13.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 cp testdata/cp-test.txt ha-031500:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 cp ha-031500:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4072431041/001/cp-test_ha-031500.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 cp ha-031500:/home/docker/cp-test.txt ha-031500-m02:/home/docker/cp-test_ha-031500_ha-031500-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500-m02 "sudo cat /home/docker/cp-test_ha-031500_ha-031500-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 cp ha-031500:/home/docker/cp-test.txt ha-031500-m03:/home/docker/cp-test_ha-031500_ha-031500-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500-m03 "sudo cat /home/docker/cp-test_ha-031500_ha-031500-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 cp ha-031500:/home/docker/cp-test.txt ha-031500-m04:/home/docker/cp-test_ha-031500_ha-031500-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500-m04 "sudo cat /home/docker/cp-test_ha-031500_ha-031500-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 cp testdata/cp-test.txt ha-031500-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 cp ha-031500-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4072431041/001/cp-test_ha-031500-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 cp ha-031500-m02:/home/docker/cp-test.txt ha-031500:/home/docker/cp-test_ha-031500-m02_ha-031500.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500 "sudo cat /home/docker/cp-test_ha-031500-m02_ha-031500.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 cp ha-031500-m02:/home/docker/cp-test.txt ha-031500-m03:/home/docker/cp-test_ha-031500-m02_ha-031500-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500-m03 "sudo cat /home/docker/cp-test_ha-031500-m02_ha-031500-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 cp ha-031500-m02:/home/docker/cp-test.txt ha-031500-m04:/home/docker/cp-test_ha-031500-m02_ha-031500-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500-m04 "sudo cat /home/docker/cp-test_ha-031500-m02_ha-031500-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 cp testdata/cp-test.txt ha-031500-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 cp ha-031500-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4072431041/001/cp-test_ha-031500-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 cp ha-031500-m03:/home/docker/cp-test.txt ha-031500:/home/docker/cp-test_ha-031500-m03_ha-031500.txt
E0929 11:34:46.268239  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500 "sudo cat /home/docker/cp-test_ha-031500-m03_ha-031500.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 cp ha-031500-m03:/home/docker/cp-test.txt ha-031500-m02:/home/docker/cp-test_ha-031500-m03_ha-031500-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500-m02 "sudo cat /home/docker/cp-test_ha-031500-m03_ha-031500-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 cp ha-031500-m03:/home/docker/cp-test.txt ha-031500-m04:/home/docker/cp-test_ha-031500-m03_ha-031500-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500-m04 "sudo cat /home/docker/cp-test_ha-031500-m03_ha-031500-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 cp testdata/cp-test.txt ha-031500-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 cp ha-031500-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4072431041/001/cp-test_ha-031500-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 cp ha-031500-m04:/home/docker/cp-test.txt ha-031500:/home/docker/cp-test_ha-031500-m04_ha-031500.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500 "sudo cat /home/docker/cp-test_ha-031500-m04_ha-031500.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 cp ha-031500-m04:/home/docker/cp-test.txt ha-031500-m02:/home/docker/cp-test_ha-031500-m04_ha-031500-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500-m02 "sudo cat /home/docker/cp-test_ha-031500-m04_ha-031500-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 cp ha-031500-m04:/home/docker/cp-test.txt ha-031500-m03:/home/docker/cp-test_ha-031500-m04_ha-031500-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 ssh -n ha-031500-m03 "sudo cat /home/docker/cp-test_ha-031500-m04_ha-031500-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.45s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (88.75s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 node stop m02 --alsologtostderr -v 5
E0929 11:34:56.895119  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:35:37.856551  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-031500 node stop m02 --alsologtostderr -v 5: (1m28.077590503s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-031500 status --alsologtostderr -v 5: exit status 7 (666.731827ms)
-- stdout --
	ha-031500
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-031500-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-031500-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-031500-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0929 11:36:19.679028  384270 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:36:19.679131  384270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:36:19.679136  384270 out.go:374] Setting ErrFile to fd 2...
	I0929 11:36:19.679139  384270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:36:19.679363  384270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-365455/.minikube/bin
	I0929 11:36:19.679548  384270 out.go:368] Setting JSON to false
	I0929 11:36:19.679586  384270 mustload.go:65] Loading cluster: ha-031500
	I0929 11:36:19.679652  384270 notify.go:220] Checking for updates...
	I0929 11:36:19.680183  384270 config.go:182] Loaded profile config "ha-031500": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:36:19.680216  384270 status.go:174] checking status of ha-031500 ...
	I0929 11:36:19.680768  384270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:36:19.680825  384270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:36:19.701806  384270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44939
	I0929 11:36:19.702411  384270 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:36:19.702956  384270 main.go:141] libmachine: Using API Version  1
	I0929 11:36:19.702994  384270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:36:19.703465  384270 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:36:19.703689  384270 main.go:141] libmachine: (ha-031500) Calling .GetState
	I0929 11:36:19.705966  384270 status.go:371] ha-031500 host status = "Running" (err=<nil>)
	I0929 11:36:19.705999  384270 host.go:66] Checking if "ha-031500" exists ...
	I0929 11:36:19.706375  384270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:36:19.706427  384270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:36:19.721234  384270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41289
	I0929 11:36:19.721719  384270 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:36:19.722209  384270 main.go:141] libmachine: Using API Version  1
	I0929 11:36:19.722237  384270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:36:19.722596  384270 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:36:19.722797  384270 main.go:141] libmachine: (ha-031500) Calling .GetIP
	I0929 11:36:19.726021  384270 main.go:141] libmachine: (ha-031500) DBG | domain ha-031500 has defined MAC address 52:54:00:87:16:b9 in network mk-ha-031500
	I0929 11:36:19.726628  384270 main.go:141] libmachine: (ha-031500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:16:b9", ip: ""} in network mk-ha-031500: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:25 +0000 UTC Type:0 Mac:52:54:00:87:16:b9 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-031500 Clientid:01:52:54:00:87:16:b9}
	I0929 11:36:19.726662  384270 main.go:141] libmachine: (ha-031500) DBG | domain ha-031500 has defined IP address 192.168.39.106 and MAC address 52:54:00:87:16:b9 in network mk-ha-031500
	I0929 11:36:19.726822  384270 host.go:66] Checking if "ha-031500" exists ...
	I0929 11:36:19.727144  384270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:36:19.727183  384270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:36:19.741650  384270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33225
	I0929 11:36:19.742210  384270 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:36:19.742703  384270 main.go:141] libmachine: Using API Version  1
	I0929 11:36:19.742729  384270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:36:19.743206  384270 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:36:19.743441  384270 main.go:141] libmachine: (ha-031500) Calling .DriverName
	I0929 11:36:19.743667  384270 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:36:19.743698  384270 main.go:141] libmachine: (ha-031500) Calling .GetSSHHostname
	I0929 11:36:19.747012  384270 main.go:141] libmachine: (ha-031500) DBG | domain ha-031500 has defined MAC address 52:54:00:87:16:b9 in network mk-ha-031500
	I0929 11:36:19.747510  384270 main.go:141] libmachine: (ha-031500) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:16:b9", ip: ""} in network mk-ha-031500: {Iface:virbr1 ExpiryTime:2025-09-29 12:30:25 +0000 UTC Type:0 Mac:52:54:00:87:16:b9 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:ha-031500 Clientid:01:52:54:00:87:16:b9}
	I0929 11:36:19.747544  384270 main.go:141] libmachine: (ha-031500) DBG | domain ha-031500 has defined IP address 192.168.39.106 and MAC address 52:54:00:87:16:b9 in network mk-ha-031500
	I0929 11:36:19.747733  384270 main.go:141] libmachine: (ha-031500) Calling .GetSSHPort
	I0929 11:36:19.747918  384270 main.go:141] libmachine: (ha-031500) Calling .GetSSHKeyPath
	I0929 11:36:19.748095  384270 main.go:141] libmachine: (ha-031500) Calling .GetSSHUsername
	I0929 11:36:19.748267  384270 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/ha-031500/id_rsa Username:docker}
	I0929 11:36:19.835853  384270 ssh_runner.go:195] Run: systemctl --version
	I0929 11:36:19.843136  384270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:36:19.861932  384270 kubeconfig.go:125] found "ha-031500" server: "https://192.168.39.254:8443"
	I0929 11:36:19.861999  384270 api_server.go:166] Checking apiserver status ...
	I0929 11:36:19.862057  384270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:36:19.883090  384270 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1447/cgroup
	W0929 11:36:19.900067  384270 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1447/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:36:19.900127  384270 ssh_runner.go:195] Run: ls
	I0929 11:36:19.905683  384270 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0929 11:36:19.911144  384270 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0929 11:36:19.911170  384270 status.go:463] ha-031500 apiserver status = Running (err=<nil>)
	I0929 11:36:19.911181  384270 status.go:176] ha-031500 status: &{Name:ha-031500 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:36:19.911199  384270 status.go:174] checking status of ha-031500-m02 ...
	I0929 11:36:19.911495  384270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:36:19.911531  384270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:36:19.925987  384270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41323
	I0929 11:36:19.926422  384270 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:36:19.927011  384270 main.go:141] libmachine: Using API Version  1
	I0929 11:36:19.927039  384270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:36:19.927414  384270 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:36:19.927644  384270 main.go:141] libmachine: (ha-031500-m02) Calling .GetState
	I0929 11:36:19.929652  384270 status.go:371] ha-031500-m02 host status = "Stopped" (err=<nil>)
	I0929 11:36:19.929668  384270 status.go:384] host is not running, skipping remaining checks
	I0929 11:36:19.929674  384270 status.go:176] ha-031500-m02 status: &{Name:ha-031500-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:36:19.929695  384270 status.go:174] checking status of ha-031500-m03 ...
	I0929 11:36:19.930062  384270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:36:19.930114  384270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:36:19.944081  384270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41471
	I0929 11:36:19.944567  384270 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:36:19.945127  384270 main.go:141] libmachine: Using API Version  1
	I0929 11:36:19.945157  384270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:36:19.945633  384270 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:36:19.945852  384270 main.go:141] libmachine: (ha-031500-m03) Calling .GetState
	I0929 11:36:19.947847  384270 status.go:371] ha-031500-m03 host status = "Running" (err=<nil>)
	I0929 11:36:19.947870  384270 host.go:66] Checking if "ha-031500-m03" exists ...
	I0929 11:36:19.948207  384270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:36:19.948267  384270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:36:19.962321  384270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44843
	I0929 11:36:19.962826  384270 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:36:19.963341  384270 main.go:141] libmachine: Using API Version  1
	I0929 11:36:19.963365  384270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:36:19.963881  384270 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:36:19.964141  384270 main.go:141] libmachine: (ha-031500-m03) Calling .GetIP
	I0929 11:36:19.967312  384270 main.go:141] libmachine: (ha-031500-m03) DBG | domain ha-031500-m03 has defined MAC address 52:54:00:e2:c8:ab in network mk-ha-031500
	I0929 11:36:19.967773  384270 main.go:141] libmachine: (ha-031500-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:c8:ab", ip: ""} in network mk-ha-031500: {Iface:virbr1 ExpiryTime:2025-09-29 12:32:26 +0000 UTC Type:0 Mac:52:54:00:e2:c8:ab Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-031500-m03 Clientid:01:52:54:00:e2:c8:ab}
	I0929 11:36:19.967822  384270 main.go:141] libmachine: (ha-031500-m03) DBG | domain ha-031500-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:e2:c8:ab in network mk-ha-031500
	I0929 11:36:19.968028  384270 host.go:66] Checking if "ha-031500-m03" exists ...
	I0929 11:36:19.968324  384270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:36:19.968365  384270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:36:19.982820  384270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38769
	I0929 11:36:19.983453  384270 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:36:19.984056  384270 main.go:141] libmachine: Using API Version  1
	I0929 11:36:19.984092  384270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:36:19.984436  384270 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:36:19.984604  384270 main.go:141] libmachine: (ha-031500-m03) Calling .DriverName
	I0929 11:36:19.984815  384270 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:36:19.984848  384270 main.go:141] libmachine: (ha-031500-m03) Calling .GetSSHHostname
	I0929 11:36:19.987966  384270 main.go:141] libmachine: (ha-031500-m03) DBG | domain ha-031500-m03 has defined MAC address 52:54:00:e2:c8:ab in network mk-ha-031500
	I0929 11:36:19.988429  384270 main.go:141] libmachine: (ha-031500-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:c8:ab", ip: ""} in network mk-ha-031500: {Iface:virbr1 ExpiryTime:2025-09-29 12:32:26 +0000 UTC Type:0 Mac:52:54:00:e2:c8:ab Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-031500-m03 Clientid:01:52:54:00:e2:c8:ab}
	I0929 11:36:19.988449  384270 main.go:141] libmachine: (ha-031500-m03) DBG | domain ha-031500-m03 has defined IP address 192.168.39.105 and MAC address 52:54:00:e2:c8:ab in network mk-ha-031500
	I0929 11:36:19.988645  384270 main.go:141] libmachine: (ha-031500-m03) Calling .GetSSHPort
	I0929 11:36:19.988836  384270 main.go:141] libmachine: (ha-031500-m03) Calling .GetSSHKeyPath
	I0929 11:36:19.989018  384270 main.go:141] libmachine: (ha-031500-m03) Calling .GetSSHUsername
	I0929 11:36:19.989199  384270 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/ha-031500-m03/id_rsa Username:docker}
	I0929 11:36:20.067898  384270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:36:20.086321  384270 kubeconfig.go:125] found "ha-031500" server: "https://192.168.39.254:8443"
	I0929 11:36:20.086351  384270 api_server.go:166] Checking apiserver status ...
	I0929 11:36:20.086389  384270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:36:20.107247  384270 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1778/cgroup
	W0929 11:36:20.118832  384270 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1778/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:36:20.118892  384270 ssh_runner.go:195] Run: ls
	I0929 11:36:20.124337  384270 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0929 11:36:20.129308  384270 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0929 11:36:20.129352  384270 status.go:463] ha-031500-m03 apiserver status = Running (err=<nil>)
	I0929 11:36:20.129365  384270 status.go:176] ha-031500-m03 status: &{Name:ha-031500-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:36:20.129387  384270 status.go:174] checking status of ha-031500-m04 ...
	I0929 11:36:20.129701  384270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:36:20.129751  384270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:36:20.143740  384270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37561
	I0929 11:36:20.144212  384270 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:36:20.144648  384270 main.go:141] libmachine: Using API Version  1
	I0929 11:36:20.144674  384270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:36:20.145061  384270 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:36:20.145247  384270 main.go:141] libmachine: (ha-031500-m04) Calling .GetState
	I0929 11:36:20.147165  384270 status.go:371] ha-031500-m04 host status = "Running" (err=<nil>)
	I0929 11:36:20.147184  384270 host.go:66] Checking if "ha-031500-m04" exists ...
	I0929 11:36:20.147694  384270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:36:20.147778  384270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:36:20.162511  384270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39473
	I0929 11:36:20.163061  384270 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:36:20.163602  384270 main.go:141] libmachine: Using API Version  1
	I0929 11:36:20.163631  384270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:36:20.164005  384270 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:36:20.164191  384270 main.go:141] libmachine: (ha-031500-m04) Calling .GetIP
	I0929 11:36:20.167358  384270 main.go:141] libmachine: (ha-031500-m04) DBG | domain ha-031500-m04 has defined MAC address 52:54:00:f3:c0:92 in network mk-ha-031500
	I0929 11:36:20.167802  384270 main.go:141] libmachine: (ha-031500-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:c0:92", ip: ""} in network mk-ha-031500: {Iface:virbr1 ExpiryTime:2025-09-29 12:34:04 +0000 UTC Type:0 Mac:52:54:00:f3:c0:92 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-031500-m04 Clientid:01:52:54:00:f3:c0:92}
	I0929 11:36:20.167831  384270 main.go:141] libmachine: (ha-031500-m04) DBG | domain ha-031500-m04 has defined IP address 192.168.39.141 and MAC address 52:54:00:f3:c0:92 in network mk-ha-031500
	I0929 11:36:20.168039  384270 host.go:66] Checking if "ha-031500-m04" exists ...
	I0929 11:36:20.168528  384270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:36:20.168597  384270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:36:20.182801  384270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42155
	I0929 11:36:20.183337  384270 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:36:20.183872  384270 main.go:141] libmachine: Using API Version  1
	I0929 11:36:20.183903  384270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:36:20.184247  384270 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:36:20.184425  384270 main.go:141] libmachine: (ha-031500-m04) Calling .DriverName
	I0929 11:36:20.184634  384270 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:36:20.184661  384270 main.go:141] libmachine: (ha-031500-m04) Calling .GetSSHHostname
	I0929 11:36:20.187659  384270 main.go:141] libmachine: (ha-031500-m04) DBG | domain ha-031500-m04 has defined MAC address 52:54:00:f3:c0:92 in network mk-ha-031500
	I0929 11:36:20.188126  384270 main.go:141] libmachine: (ha-031500-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:c0:92", ip: ""} in network mk-ha-031500: {Iface:virbr1 ExpiryTime:2025-09-29 12:34:04 +0000 UTC Type:0 Mac:52:54:00:f3:c0:92 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:ha-031500-m04 Clientid:01:52:54:00:f3:c0:92}
	I0929 11:36:20.188154  384270 main.go:141] libmachine: (ha-031500-m04) DBG | domain ha-031500-m04 has defined IP address 192.168.39.141 and MAC address 52:54:00:f3:c0:92 in network mk-ha-031500
	I0929 11:36:20.188336  384270 main.go:141] libmachine: (ha-031500-m04) Calling .GetSSHPort
	I0929 11:36:20.188490  384270 main.go:141] libmachine: (ha-031500-m04) Calling .GetSSHKeyPath
	I0929 11:36:20.188630  384270 main.go:141] libmachine: (ha-031500-m04) Calling .GetSSHUsername
	I0929 11:36:20.188819  384270 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/ha-031500-m04/id_rsa Username:docker}
	I0929 11:36:20.275370  384270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:36:20.293031  384270 status.go:176] ha-031500-m04 status: &{Name:ha-031500-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (88.75s)
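The exit status 7 above is the expected outcome, not a failure: per minikube's documented convention, `minikube status` encodes health on the exit code's bits from right to left (1 for the host/VM, 2 for the cluster, 4 for Kubernetes), so 7 = 1+2+4 flags all three as not OK for the stopped node. A minimal Go sketch of reading that code outside the harness, reusing the binary path and profile name from this log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation the test makes; binary path and profile name are taken
	// from this log and will differ in other environments.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-031500", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// With one control-plane node stopped, status exits with code 7
		// (1+2+4), which the harness above accepts as the expected result.
		fmt.Printf("status exited with code %d\n", exitErr.ExitCode())
	}
}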

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.7s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.70s)
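The Degraded/HAppy assertions in these steps are driven by `profile list --output json`. A minimal Go sketch of reading the profile status, assuming (this is not shown in the log) that the JSON is an object with `valid` and `invalid` arrays whose entries carry `Name` and `Status` fields:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList models only the fields this sketch needs; the key names are an
// assumption about minikube's JSON output, not taken from this log.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// A stopped control plane should surface as a "Degraded" status
		// rather than an error, which is what the test above asserts.
		fmt.Printf("%s: %s\n", p.Name, p.Status)
	}
}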

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (33.31s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-031500 node start m02 --alsologtostderr -v 5: (32.149512131s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-031500 status --alsologtostderr -v 5: (1.081090324s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (33.31s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.98s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.98s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (374.8s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 stop --alsologtostderr -v 5
E0929 11:36:59.778801  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:39:15.919159  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:39:43.620904  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:39:46.268930  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:41:09.346208  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-031500 stop --alsologtostderr -v 5: (4m16.560514767s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-031500 start --wait true --alsologtostderr -v 5: (1m58.106903401s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (374.80s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.42s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-031500 node delete m03 --alsologtostderr -v 5: (17.619565857s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.42s)
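The go-template in the last step iterates every node's `.status.conditions` and prints only the Ready condition's status, so the assertion reduces to checking that each emitted line is `True`. A minimal Go sketch of the same readiness probe, assuming `kubectl` on PATH and a kubeconfig already pointing at the cluster:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// The same template the test passes above, minus the extra quoting.
	tmpl := `{{range .items}}{{range .status.conditions}}` +
		`{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		panic(err)
	}
	for _, status := range strings.Fields(string(out)) {
		if status != "True" {
			fmt.Println("found a node that is not Ready:", status)
			os.Exit(1)
		}
	}
	fmt.Println("all nodes Ready")
}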

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (254.02s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 stop --alsologtostderr -v 5
E0929 11:44:15.919225  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:44:46.268500  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-031500 stop --alsologtostderr -v 5: (4m13.910549679s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-031500 status --alsologtostderr -v 5: exit status 7 (113.839052ms)
-- stdout --
	ha-031500
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-031500-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-031500-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0929 11:47:43.146522  388176 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:47:43.146778  388176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:47:43.146786  388176 out.go:374] Setting ErrFile to fd 2...
	I0929 11:47:43.146790  388176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:47:43.147033  388176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-365455/.minikube/bin
	I0929 11:47:43.147218  388176 out.go:368] Setting JSON to false
	I0929 11:47:43.147252  388176 mustload.go:65] Loading cluster: ha-031500
	I0929 11:47:43.147363  388176 notify.go:220] Checking for updates...
	I0929 11:47:43.147652  388176 config.go:182] Loaded profile config "ha-031500": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:47:43.147671  388176 status.go:174] checking status of ha-031500 ...
	I0929 11:47:43.148106  388176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:47:43.148146  388176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:47:43.172122  388176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40817
	I0929 11:47:43.172683  388176 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:47:43.173609  388176 main.go:141] libmachine: Using API Version  1
	I0929 11:47:43.173652  388176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:47:43.174132  388176 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:47:43.174373  388176 main.go:141] libmachine: (ha-031500) Calling .GetState
	I0929 11:47:43.176162  388176 status.go:371] ha-031500 host status = "Stopped" (err=<nil>)
	I0929 11:47:43.176180  388176 status.go:384] host is not running, skipping remaining checks
	I0929 11:47:43.176187  388176 status.go:176] ha-031500 status: &{Name:ha-031500 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:47:43.176232  388176 status.go:174] checking status of ha-031500-m02 ...
	I0929 11:47:43.176516  388176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:47:43.176560  388176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:47:43.190117  388176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32847
	I0929 11:47:43.190642  388176 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:47:43.191178  388176 main.go:141] libmachine: Using API Version  1
	I0929 11:47:43.191201  388176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:47:43.191546  388176 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:47:43.191779  388176 main.go:141] libmachine: (ha-031500-m02) Calling .GetState
	I0929 11:47:43.193698  388176 status.go:371] ha-031500-m02 host status = "Stopped" (err=<nil>)
	I0929 11:47:43.193715  388176 status.go:384] host is not running, skipping remaining checks
	I0929 11:47:43.193722  388176 status.go:176] ha-031500-m02 status: &{Name:ha-031500-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:47:43.193745  388176 status.go:174] checking status of ha-031500-m04 ...
	I0929 11:47:43.194061  388176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:47:43.194115  388176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:47:43.207638  388176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36519
	I0929 11:47:43.208174  388176 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:47:43.208688  388176 main.go:141] libmachine: Using API Version  1
	I0929 11:47:43.208710  388176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:47:43.209058  388176 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:47:43.209265  388176 main.go:141] libmachine: (ha-031500-m04) Calling .GetState
	I0929 11:47:43.210808  388176 status.go:371] ha-031500-m04 host status = "Stopped" (err=<nil>)
	I0929 11:47:43.210821  388176 status.go:384] host is not running, skipping remaining checks
	I0929 11:47:43.210826  388176 status.go:176] ha-031500-m04 status: &{Name:ha-031500-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (254.02s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (102.98s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0929 11:49:15.919114  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-031500 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m42.214419634s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (102.98s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (73.75s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 node add --control-plane --alsologtostderr -v 5
E0929 11:49:46.268870  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:50:38.987166  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-031500 node add --control-plane --alsologtostderr -v 5: (1m12.859855271s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-031500 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (73.75s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.95s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.95s)

                                                
                                    
TestJSONOutput/start/Command (79.59s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-649675 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-649675 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m19.591338031s)
--- PASS: TestJSONOutput/start/Command (79.59s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.78s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-649675 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.78s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.67s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-649675 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.21s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-649675 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-649675 --output=json --user=testUser: (7.211356554s)
--- PASS: TestJSONOutput/stop/Command (7.21s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-610852 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-610852 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (66.645511ms)
-- stdout --
	{"specversion":"1.0","id":"48f2f347-0070-4ebb-af48-61ca0e0a115e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-610852] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c7787c2e-b7c1-46cd-a589-d45ca2398f0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21655"}}
	{"specversion":"1.0","id":"adb7adb6-15f4-47f9-9cd2-60814b373ec3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2437a914-59d3-4bc4-8a44-21d7d213a7ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21655-365455/kubeconfig"}}
	{"specversion":"1.0","id":"39ad5e37-15f3-4305-8544-e97b1df5d089","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-365455/.minikube"}}
	{"specversion":"1.0","id":"91521723-2960-4786-ae67-7eb15512fa86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d7fca284-c87b-41e6-a1f6-1da6c582eea3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bc3686c3-860b-4823-a15e-2d9d28f0a7ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-610852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-610852
--- PASS: TestErrorJSONOutput (0.21s)
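Each stdout line above is a self-contained CloudEvents-style JSON record (`specversion`, `id`, `source`, `type`, `data`). A minimal Go sketch that decodes such a stream from stdin and surfaces error events like the `DRV_UNSUPPORTED_OS` one shown; it models only the fields visible in this log:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent covers only the fields present in the stdout above.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Pipe the --output=json stream in, e.g.:
	//   minikube start --output=json ... | this-program
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s: %s (exit code %s)\n",
				ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
		}
	}
}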

                                                
                                    
TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (80.9s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-273731 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-273731 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (38.082045531s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-286720 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-286720 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.964380625s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-273731
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-286720
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-286720" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-286720
helpers_test.go:175: Cleaning up "first-273731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-273731
--- PASS: TestMinikubeProfile (80.90s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (20.65s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-806835 --memory=3072 --mount-string /tmp/TestMountStartserial1351395164/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-806835 --memory=3072 --mount-string /tmp/TestMountStartserial1351395164/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (19.647623726s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.65s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-806835 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-806835 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
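`findmnt --json /minikube-host` both asserts that the mount point exists (findmnt exits non-zero otherwise) and reports its details. A minimal Go sketch of consuming that output inside the guest, assuming findmnt's usual top-level `filesystems` array:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// findmntOutput assumes findmnt's usual JSON shape: {"filesystems": [...]}.
type findmntOutput struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		FSType  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

func main() {
	// Run inside the minikube guest; a non-zero exit already means the
	// mount point is missing, which is the first half of the check.
	out, err := exec.Command("findmnt", "--json", "/minikube-host").Output()
	if err != nil {
		panic(err)
	}
	var fm findmntOutput
	if err := json.Unmarshal(out, &fm); err != nil {
		panic(err)
	}
	for _, fs := range fm.Filesystems {
		fmt.Printf("%s mounted from %s (type %s, options %s)\n",
			fs.Target, fs.Source, fs.FSType, fs.Options)
	}
}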

                                                
                                    
TestMountStart/serial/StartWithMountSecond (20.77s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-821675 --memory=3072 --mount-string /tmp/TestMountStartserial1351395164/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-821675 --memory=3072 --mount-string /tmp/TestMountStartserial1351395164/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (19.773708864s)
E0929 11:54:15.918281  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMountStart/serial/StartWithMountSecond (20.77s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-821675 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-821675 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.72s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-806835 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-821675 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-821675 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.36s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-821675
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-821675: (1.358618743s)
--- PASS: TestMountStart/serial/Stop (1.36s)

                                                
                                    
TestMountStart/serial/RestartStopped (19.73s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-821675
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-821675: (18.729595023s)
--- PASS: TestMountStart/serial/RestartStopped (19.73s)
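Note that the restart passes no flags: a bare start against an existing profile reloads its saved configuration, including the mount options set up earlier. A rough hand-run equivalent:

    # stop the profile, then start it again with no extra flags;
    # minikube re-reads the profile's stored config (driver, memory, mount)
    minikube stop -p mount-start-2-821675
    minikube start -p mount-start-2-821675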

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-821675 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-821675 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (129.37s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-404520 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0929 11:54:46.268707  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-404520 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (2m8.938296563s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (129.37s)
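A hand-run equivalent of the fresh two-node bring-up, assuming a stock minikube binary:

    # control plane plus one worker on kvm2/CRI-O, waiting for all components
    minikube start -p multinode-404520 --nodes=2 --memory=3072 \
      --driver=kvm2 --container-runtime=crio --wait=true
    # both nodes should report host/kubelet Running
    minikube -p multinode-404520 status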

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.00s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404520 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404520 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-404520 -- rollout status deployment/busybox: (4.520315915s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404520 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404520 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404520 -- exec busybox-7b57f96db7-lrps9 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404520 -- exec busybox-7b57f96db7-n2k45 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404520 -- exec busybox-7b57f96db7-lrps9 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404520 -- exec busybox-7b57f96db7-n2k45 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404520 -- exec busybox-7b57f96db7-lrps9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404520 -- exec busybox-7b57f96db7-n2k45 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.00s)
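The DNS checks above boil down to exec'ing nslookup inside each busybox replica; a sketch using plain kubectl (the pod name is copied from this run, so substitute your own):

    # deploy the two-replica busybox test workload and wait for rollout
    kubectl --context multinode-404520 apply -f testdata/multinodes/multinode-pod-dns-test.yaml
    kubectl --context multinode-404520 rollout status deployment/busybox
    # resolve an external name and the in-cluster service name from a pod
    kubectl --context multinode-404520 exec busybox-7b57f96db7-lrps9 -- nslookup kubernetes.io
    kubectl --context multinode-404520 exec busybox-7b57f96db7-lrps9 -- nslookup kubernetes.default.svc.cluster.local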

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404520 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404520 -- exec busybox-7b57f96db7-lrps9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404520 -- exec busybox-7b57f96db7-lrps9 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404520 -- exec busybox-7b57f96db7-n2k45 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-404520 -- exec busybox-7b57f96db7-n2k45 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)
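The host-reachability check resolves host.minikube.internal inside a pod and pings the address it returns; roughly:

    # extract the host IP from nslookup output (192.168.39.1 on this run's
    # KVM network), then send a single ping to it
    kubectl --context multinode-404520 exec busybox-7b57f96db7-lrps9 -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl --context multinode-404520 exec busybox-7b57f96db7-lrps9 -- \
      sh -c "ping -c 1 192.168.39.1"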

                                                
                                    
TestMultiNode/serial/AddNode (45.75s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-404520 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-404520 -v=5 --alsologtostderr: (45.15245823s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.75s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-404520 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.60s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.60s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.55s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 cp testdata/cp-test.txt multinode-404520:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 ssh -n multinode-404520 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 cp multinode-404520:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1532265375/001/cp-test_multinode-404520.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 ssh -n multinode-404520 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 cp multinode-404520:/home/docker/cp-test.txt multinode-404520-m02:/home/docker/cp-test_multinode-404520_multinode-404520-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 ssh -n multinode-404520 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 ssh -n multinode-404520-m02 "sudo cat /home/docker/cp-test_multinode-404520_multinode-404520-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 cp multinode-404520:/home/docker/cp-test.txt multinode-404520-m03:/home/docker/cp-test_multinode-404520_multinode-404520-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 ssh -n multinode-404520 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 ssh -n multinode-404520-m03 "sudo cat /home/docker/cp-test_multinode-404520_multinode-404520-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 cp testdata/cp-test.txt multinode-404520-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 ssh -n multinode-404520-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 cp multinode-404520-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1532265375/001/cp-test_multinode-404520-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 ssh -n multinode-404520-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 cp multinode-404520-m02:/home/docker/cp-test.txt multinode-404520:/home/docker/cp-test_multinode-404520-m02_multinode-404520.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 ssh -n multinode-404520-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 ssh -n multinode-404520 "sudo cat /home/docker/cp-test_multinode-404520-m02_multinode-404520.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 cp multinode-404520-m02:/home/docker/cp-test.txt multinode-404520-m03:/home/docker/cp-test_multinode-404520-m02_multinode-404520-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 ssh -n multinode-404520-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 ssh -n multinode-404520-m03 "sudo cat /home/docker/cp-test_multinode-404520-m02_multinode-404520-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 cp testdata/cp-test.txt multinode-404520-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 ssh -n multinode-404520-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 cp multinode-404520-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1532265375/001/cp-test_multinode-404520-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 ssh -n multinode-404520-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 cp multinode-404520-m03:/home/docker/cp-test.txt multinode-404520:/home/docker/cp-test_multinode-404520-m03_multinode-404520.txt
E0929 11:57:49.348555  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 ssh -n multinode-404520-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 ssh -n multinode-404520 "sudo cat /home/docker/cp-test_multinode-404520-m03_multinode-404520.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 cp multinode-404520-m03:/home/docker/cp-test.txt multinode-404520-m02:/home/docker/cp-test_multinode-404520-m03_multinode-404520-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 ssh -n multinode-404520-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 ssh -n multinode-404520-m02 "sudo cat /home/docker/cp-test_multinode-404520-m03_multinode-404520-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.55s)
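The matrix above exercises all three copy directions that minikube cp supports; a condensed sketch (destination paths are illustrative):

    # host -> node, node -> host, and node -> node
    minikube -p multinode-404520 cp testdata/cp-test.txt multinode-404520:/home/docker/cp-test.txt
    minikube -p multinode-404520 cp multinode-404520:/home/docker/cp-test.txt /tmp/cp-test.txt
    minikube -p multinode-404520 cp multinode-404520:/home/docker/cp-test.txt multinode-404520-m02:/home/docker/cp-test.txt
    # verify on the receiving node over ssh
    minikube -p multinode-404520 ssh -n multinode-404520-m02 "sudo cat /home/docker/cp-test.txt"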

                                                
                                    
TestMultiNode/serial/StopNode (2.44s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-404520 node stop m03: (1.549983053s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-404520 status: exit status 7 (441.637405ms)

                                                
                                                
-- stdout --
	multinode-404520
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-404520-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-404520-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-404520 status --alsologtostderr: exit status 7 (445.368191ms)

                                                
                                                
-- stdout --
	multinode-404520
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-404520-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-404520-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:57:52.673711  395866 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:57:52.673961  395866 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:57:52.673969  395866 out.go:374] Setting ErrFile to fd 2...
	I0929 11:57:52.674001  395866 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:57:52.674227  395866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-365455/.minikube/bin
	I0929 11:57:52.674422  395866 out.go:368] Setting JSON to false
	I0929 11:57:52.674456  395866 mustload.go:65] Loading cluster: multinode-404520
	I0929 11:57:52.674577  395866 notify.go:220] Checking for updates...
	I0929 11:57:52.674886  395866 config.go:182] Loaded profile config "multinode-404520": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:57:52.674907  395866 status.go:174] checking status of multinode-404520 ...
	I0929 11:57:52.675440  395866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:57:52.675491  395866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:57:52.690685  395866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43371
	I0929 11:57:52.691243  395866 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:57:52.691877  395866 main.go:141] libmachine: Using API Version  1
	I0929 11:57:52.691900  395866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:57:52.692410  395866 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:57:52.692713  395866 main.go:141] libmachine: (multinode-404520) Calling .GetState
	I0929 11:57:52.694643  395866 status.go:371] multinode-404520 host status = "Running" (err=<nil>)
	I0929 11:57:52.694663  395866 host.go:66] Checking if "multinode-404520" exists ...
	I0929 11:57:52.695017  395866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:57:52.695083  395866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:57:52.709901  395866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37733
	I0929 11:57:52.710396  395866 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:57:52.710883  395866 main.go:141] libmachine: Using API Version  1
	I0929 11:57:52.710926  395866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:57:52.711355  395866 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:57:52.711578  395866 main.go:141] libmachine: (multinode-404520) Calling .GetIP
	I0929 11:57:52.714896  395866 main.go:141] libmachine: (multinode-404520) DBG | domain multinode-404520 has defined MAC address 52:54:00:fe:49:57 in network mk-multinode-404520
	I0929 11:57:52.715338  395866 main.go:141] libmachine: (multinode-404520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:49:57", ip: ""} in network mk-multinode-404520: {Iface:virbr1 ExpiryTime:2025-09-29 12:54:55 +0000 UTC Type:0 Mac:52:54:00:fe:49:57 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-404520 Clientid:01:52:54:00:fe:49:57}
	I0929 11:57:52.715363  395866 main.go:141] libmachine: (multinode-404520) DBG | domain multinode-404520 has defined IP address 192.168.39.68 and MAC address 52:54:00:fe:49:57 in network mk-multinode-404520
	I0929 11:57:52.715664  395866 host.go:66] Checking if "multinode-404520" exists ...
	I0929 11:57:52.716002  395866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:57:52.716060  395866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:57:52.731104  395866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44279
	I0929 11:57:52.731635  395866 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:57:52.732198  395866 main.go:141] libmachine: Using API Version  1
	I0929 11:57:52.732229  395866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:57:52.732611  395866 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:57:52.732857  395866 main.go:141] libmachine: (multinode-404520) Calling .DriverName
	I0929 11:57:52.733189  395866 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:57:52.733225  395866 main.go:141] libmachine: (multinode-404520) Calling .GetSSHHostname
	I0929 11:57:52.737050  395866 main.go:141] libmachine: (multinode-404520) DBG | domain multinode-404520 has defined MAC address 52:54:00:fe:49:57 in network mk-multinode-404520
	I0929 11:57:52.737532  395866 main.go:141] libmachine: (multinode-404520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:49:57", ip: ""} in network mk-multinode-404520: {Iface:virbr1 ExpiryTime:2025-09-29 12:54:55 +0000 UTC Type:0 Mac:52:54:00:fe:49:57 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-404520 Clientid:01:52:54:00:fe:49:57}
	I0929 11:57:52.737568  395866 main.go:141] libmachine: (multinode-404520) DBG | domain multinode-404520 has defined IP address 192.168.39.68 and MAC address 52:54:00:fe:49:57 in network mk-multinode-404520
	I0929 11:57:52.737818  395866 main.go:141] libmachine: (multinode-404520) Calling .GetSSHPort
	I0929 11:57:52.738051  395866 main.go:141] libmachine: (multinode-404520) Calling .GetSSHKeyPath
	I0929 11:57:52.738228  395866 main.go:141] libmachine: (multinode-404520) Calling .GetSSHUsername
	I0929 11:57:52.738421  395866 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/multinode-404520/id_rsa Username:docker}
	I0929 11:57:52.825391  395866 ssh_runner.go:195] Run: systemctl --version
	I0929 11:57:52.833147  395866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:57:52.851465  395866 kubeconfig.go:125] found "multinode-404520" server: "https://192.168.39.68:8443"
	I0929 11:57:52.851509  395866 api_server.go:166] Checking apiserver status ...
	I0929 11:57:52.851562  395866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:57:52.870155  395866 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1399/cgroup
	W0929 11:57:52.881964  395866 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1399/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:57:52.882073  395866 ssh_runner.go:195] Run: ls
	I0929 11:57:52.887334  395866 api_server.go:253] Checking apiserver healthz at https://192.168.39.68:8443/healthz ...
	I0929 11:57:52.892054  395866 api_server.go:279] https://192.168.39.68:8443/healthz returned 200:
	ok
	I0929 11:57:52.892088  395866 status.go:463] multinode-404520 apiserver status = Running (err=<nil>)
	I0929 11:57:52.892102  395866 status.go:176] multinode-404520 status: &{Name:multinode-404520 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:57:52.892146  395866 status.go:174] checking status of multinode-404520-m02 ...
	I0929 11:57:52.892473  395866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:57:52.892533  395866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:57:52.907159  395866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34879
	I0929 11:57:52.907654  395866 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:57:52.908126  395866 main.go:141] libmachine: Using API Version  1
	I0929 11:57:52.908148  395866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:57:52.908495  395866 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:57:52.908728  395866 main.go:141] libmachine: (multinode-404520-m02) Calling .GetState
	I0929 11:57:52.910996  395866 status.go:371] multinode-404520-m02 host status = "Running" (err=<nil>)
	I0929 11:57:52.911018  395866 host.go:66] Checking if "multinode-404520-m02" exists ...
	I0929 11:57:52.911343  395866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:57:52.911387  395866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:57:52.927135  395866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40859
	I0929 11:57:52.927614  395866 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:57:52.928180  395866 main.go:141] libmachine: Using API Version  1
	I0929 11:57:52.928207  395866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:57:52.928586  395866 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:57:52.928850  395866 main.go:141] libmachine: (multinode-404520-m02) Calling .GetIP
	I0929 11:57:52.932102  395866 main.go:141] libmachine: (multinode-404520-m02) DBG | domain multinode-404520-m02 has defined MAC address 52:54:00:86:aa:b9 in network mk-multinode-404520
	I0929 11:57:52.932571  395866 main.go:141] libmachine: (multinode-404520-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:b9", ip: ""} in network mk-multinode-404520: {Iface:virbr1 ExpiryTime:2025-09-29 12:56:20 +0000 UTC Type:0 Mac:52:54:00:86:aa:b9 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-404520-m02 Clientid:01:52:54:00:86:aa:b9}
	I0929 11:57:52.932602  395866 main.go:141] libmachine: (multinode-404520-m02) DBG | domain multinode-404520-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:86:aa:b9 in network mk-multinode-404520
	I0929 11:57:52.932754  395866 host.go:66] Checking if "multinode-404520-m02" exists ...
	I0929 11:57:52.933081  395866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:57:52.933125  395866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:57:52.947795  395866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33813
	I0929 11:57:52.948295  395866 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:57:52.948765  395866 main.go:141] libmachine: Using API Version  1
	I0929 11:57:52.948798  395866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:57:52.949204  395866 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:57:52.949414  395866 main.go:141] libmachine: (multinode-404520-m02) Calling .DriverName
	I0929 11:57:52.949630  395866 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:57:52.949654  395866 main.go:141] libmachine: (multinode-404520-m02) Calling .GetSSHHostname
	I0929 11:57:52.953200  395866 main.go:141] libmachine: (multinode-404520-m02) DBG | domain multinode-404520-m02 has defined MAC address 52:54:00:86:aa:b9 in network mk-multinode-404520
	I0929 11:57:52.953672  395866 main.go:141] libmachine: (multinode-404520-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:b9", ip: ""} in network mk-multinode-404520: {Iface:virbr1 ExpiryTime:2025-09-29 12:56:20 +0000 UTC Type:0 Mac:52:54:00:86:aa:b9 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-404520-m02 Clientid:01:52:54:00:86:aa:b9}
	I0929 11:57:52.953711  395866 main.go:141] libmachine: (multinode-404520-m02) DBG | domain multinode-404520-m02 has defined IP address 192.168.39.17 and MAC address 52:54:00:86:aa:b9 in network mk-multinode-404520
	I0929 11:57:52.953965  395866 main.go:141] libmachine: (multinode-404520-m02) Calling .GetSSHPort
	I0929 11:57:52.954183  395866 main.go:141] libmachine: (multinode-404520-m02) Calling .GetSSHKeyPath
	I0929 11:57:52.954383  395866 main.go:141] libmachine: (multinode-404520-m02) Calling .GetSSHUsername
	I0929 11:57:52.954602  395866 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21655-365455/.minikube/machines/multinode-404520-m02/id_rsa Username:docker}
	I0929 11:57:53.034827  395866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:57:53.051309  395866 status.go:176] multinode-404520-m02 status: &{Name:multinode-404520-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:57:53.051349  395866 status.go:174] checking status of multinode-404520-m03 ...
	I0929 11:57:53.051791  395866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 11:57:53.051855  395866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 11:57:53.066662  395866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43561
	I0929 11:57:53.067259  395866 main.go:141] libmachine: () Calling .GetVersion
	I0929 11:57:53.067745  395866 main.go:141] libmachine: Using API Version  1
	I0929 11:57:53.067769  395866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 11:57:53.068155  395866 main.go:141] libmachine: () Calling .GetMachineName
	I0929 11:57:53.068346  395866 main.go:141] libmachine: (multinode-404520-m03) Calling .GetState
	I0929 11:57:53.070025  395866 status.go:371] multinode-404520-m03 host status = "Stopped" (err=<nil>)
	I0929 11:57:53.070046  395866 status.go:384] host is not running, skipping remaining checks
	I0929 11:57:53.070054  395866 status.go:176] multinode-404520-m03 status: &{Name:multinode-404520-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.44s)
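Stopping one worker leaves the cluster serving but makes status exit non-zero, which is why the test asserts on exit status 7 rather than 0; by hand:

    # stop only the third node; the control plane keeps running
    minikube -p multinode-404520 node stop m03
    # status exits 7 on this run because one host is Stopped, so check the
    # exit code instead of treating any non-zero exit as a hard failure
    minikube -p multinode-404520 status
    echo $?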

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.16s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-404520 node start m03 -v=5 --alsologtostderr: (37.511319577s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.16s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (339.72s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-404520
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-404520
E0929 11:59:15.927399  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:59:46.268643  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-404520: (2m56.101982817s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-404520 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-404520 --wait=true -v=5 --alsologtostderr: (2m43.511031836s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-404520
--- PASS: TestMultiNode/serial/RestartKeepsNodes (339.72s)
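The restart check is: record the node list, stop the whole cluster, start it again, and confirm the list is unchanged; roughly:

    minikube node list -p multinode-404520
    minikube stop -p multinode-404520
    minikube start -p multinode-404520 --wait=true
    # should print the same nodes as before the stop
    minikube node list -p multinode-404520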

                                                
                                    
TestMultiNode/serial/DeleteNode (2.68s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-404520 node delete m03: (2.120868074s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.68s)
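Deleting a node removes it both from the minikube profile and from the Kubernetes view; the short form:

    # drop the third node, then confirm Kubernetes no longer lists it
    minikube -p multinode-404520 node delete m03
    kubectl get nodes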

                                                
                                    
TestMultiNode/serial/StopMultiNode (169.43s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 stop
E0929 12:04:15.919853  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:04:46.268592  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-404520 stop: (2m49.250719554s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-404520 status: exit status 7 (96.463192ms)

                                                
                                                
-- stdout --
	multinode-404520
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-404520-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-404520 status --alsologtostderr: exit status 7 (81.403586ms)

                                                
                                                
-- stdout --
	multinode-404520
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-404520-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 12:07:03.018714  398831 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:07:03.019045  398831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:07:03.019059  398831 out.go:374] Setting ErrFile to fd 2...
	I0929 12:07:03.019067  398831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:07:03.019658  398831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-365455/.minikube/bin
	I0929 12:07:03.020073  398831 out.go:368] Setting JSON to false
	I0929 12:07:03.020206  398831 mustload.go:65] Loading cluster: multinode-404520
	I0929 12:07:03.020285  398831 notify.go:220] Checking for updates...
	I0929 12:07:03.020698  398831 config.go:182] Loaded profile config "multinode-404520": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:07:03.020721  398831 status.go:174] checking status of multinode-404520 ...
	I0929 12:07:03.021218  398831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 12:07:03.021257  398831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:07:03.034676  398831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39043
	I0929 12:07:03.035211  398831 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:07:03.035807  398831 main.go:141] libmachine: Using API Version  1
	I0929 12:07:03.035837  398831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:07:03.036227  398831 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:07:03.036464  398831 main.go:141] libmachine: (multinode-404520) Calling .GetState
	I0929 12:07:03.038261  398831 status.go:371] multinode-404520 host status = "Stopped" (err=<nil>)
	I0929 12:07:03.038279  398831 status.go:384] host is not running, skipping remaining checks
	I0929 12:07:03.038287  398831 status.go:176] multinode-404520 status: &{Name:multinode-404520 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:07:03.038333  398831 status.go:174] checking status of multinode-404520-m02 ...
	I0929 12:07:03.038627  398831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0929 12:07:03.038669  398831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0929 12:07:03.051958  398831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33189
	I0929 12:07:03.052421  398831 main.go:141] libmachine: () Calling .GetVersion
	I0929 12:07:03.052874  398831 main.go:141] libmachine: Using API Version  1
	I0929 12:07:03.052907  398831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0929 12:07:03.053259  398831 main.go:141] libmachine: () Calling .GetMachineName
	I0929 12:07:03.053451  398831 main.go:141] libmachine: (multinode-404520-m02) Calling .GetState
	I0929 12:07:03.055354  398831 status.go:371] multinode-404520-m02 host status = "Stopped" (err=<nil>)
	I0929 12:07:03.055368  398831 status.go:384] host is not running, skipping remaining checks
	I0929 12:07:03.055386  398831 status.go:176] multinode-404520-m02 status: &{Name:multinode-404520-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (169.43s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (95.44s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-404520 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0929 12:07:18.989083  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-404520 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m34.888658611s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-404520 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (95.44s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.77s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-404520
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-404520-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-404520-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (63.420254ms)

                                                
                                                
-- stdout --
	* [multinode-404520-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21655
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21655-365455/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-365455/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-404520-m02' is duplicated with machine name 'multinode-404520-m02' in profile 'multinode-404520'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-404520-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-404520-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (36.589620915s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-404520
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-404520: exit status 80 (225.159368ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-404520 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-404520-m03 already exists in multinode-404520-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-404520-m03
E0929 12:09:15.918884  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.77s)
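The guard being exercised: a new profile may not reuse a machine name that an existing multi-node profile already owns. A sketch of the failing call (it exits 14 with MK_USAGE, as shown above):

    # multinode-404520-m02 is already the second machine of multinode-404520,
    # so this start is rejected before any VM work happens
    minikube start -p multinode-404520-m02 --driver=kvm2 --container-runtime=crio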

                                                
                                    
TestScheduledStopUnix (108.95s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-175575 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-175575 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (37.185492705s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-175575 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-175575 -n scheduled-stop-175575
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-175575 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0929 12:12:09.214517  369423 retry.go:31] will retry after 79.873µs: open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/scheduled-stop-175575/pid: no such file or directory
I0929 12:12:09.215671  369423 retry.go:31] will retry after 75.205µs: open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/scheduled-stop-175575/pid: no such file or directory
I0929 12:12:09.216869  369423 retry.go:31] will retry after 325.94µs: open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/scheduled-stop-175575/pid: no such file or directory
I0929 12:12:09.218052  369423 retry.go:31] will retry after 247.236µs: open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/scheduled-stop-175575/pid: no such file or directory
I0929 12:12:09.219216  369423 retry.go:31] will retry after 457µs: open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/scheduled-stop-175575/pid: no such file or directory
I0929 12:12:09.220342  369423 retry.go:31] will retry after 841.843µs: open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/scheduled-stop-175575/pid: no such file or directory
I0929 12:12:09.221488  369423 retry.go:31] will retry after 1.634312ms: open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/scheduled-stop-175575/pid: no such file or directory
I0929 12:12:09.223673  369423 retry.go:31] will retry after 1.678751ms: open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/scheduled-stop-175575/pid: no such file or directory
I0929 12:12:09.225901  369423 retry.go:31] will retry after 2.247253ms: open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/scheduled-stop-175575/pid: no such file or directory
I0929 12:12:09.229122  369423 retry.go:31] will retry after 4.998733ms: open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/scheduled-stop-175575/pid: no such file or directory
I0929 12:12:09.234348  369423 retry.go:31] will retry after 5.271595ms: open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/scheduled-stop-175575/pid: no such file or directory
I0929 12:12:09.240581  369423 retry.go:31] will retry after 8.839343ms: open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/scheduled-stop-175575/pid: no such file or directory
I0929 12:12:09.249860  369423 retry.go:31] will retry after 14.1659ms: open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/scheduled-stop-175575/pid: no such file or directory
I0929 12:12:09.265128  369423 retry.go:31] will retry after 24.094833ms: open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/scheduled-stop-175575/pid: no such file or directory
I0929 12:12:09.289356  369423 retry.go:31] will retry after 37.910033ms: open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/scheduled-stop-175575/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-175575 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-175575 -n scheduled-stop-175575
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-175575
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-175575 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-175575
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-175575: exit status 7 (78.438893ms)

                                                
                                                
-- stdout --
	scheduled-stop-175575
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-175575 -n scheduled-stop-175575
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-175575 -n scheduled-stop-175575: exit status 7 (72.85336ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-175575" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-175575
--- PASS: TestScheduledStopUnix (108.95s)
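The scheduled-stop flow under test, condensed (the sleep is illustrative padding so the 15s timer can fire):

    # schedule a stop, then cancel it before it fires
    minikube stop -p scheduled-stop-175575 --schedule 5m
    minikube stop -p scheduled-stop-175575 --cancel-scheduled
    # schedule again and let it fire; status then reports Stopped (exit 7)
    minikube stop -p scheduled-stop-175575 --schedule 15s
    sleep 20
    minikube status -p scheduled-stop-175575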

                                                
                                    
TestRunningBinaryUpgrade (78.88s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1981784277 start -p running-upgrade-460754 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1981784277 start -p running-upgrade-460754 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (52.411801431s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-460754 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-460754 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (23.026763957s)
helpers_test.go:175: Cleaning up "running-upgrade-460754" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-460754
--- PASS: TestRunningBinaryUpgrade (78.88s)

                                                
                                    
TestKubernetesUpgrade (173.58s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-494977 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-494977 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (42.829811925s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-494977
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-494977: (1.797401726s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-494977 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-494977 status --format={{.Host}}: exit status 7 (81.240812ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-494977 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-494977 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (58.464999758s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-494977 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-494977 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-494977 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 106 (83.821936ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-494977] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21655
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21655-365455/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-365455/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-494977
	    minikube start -p kubernetes-upgrade-494977 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4949772 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-494977 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-494977 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-494977 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m9.291861268s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-494977" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-494977
--- PASS: TestKubernetesUpgrade (173.58s)
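The upgrade path that passes here, reduced to its three steps (the reverse direction is refused with K8S_DOWNGRADE_UNSUPPORTED, exit status 106, as shown above):

    minikube start -p kubernetes-upgrade-494977 --memory=3072 --kubernetes-version=v1.28.0 \
      --driver=kvm2 --container-runtime=crio
    minikube stop -p kubernetes-upgrade-494977
    minikube start -p kubernetes-upgrade-494977 --memory=3072 --kubernetes-version=v1.34.0 \
      --driver=kvm2 --container-runtime=crio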

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.62s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.62s)

                                                
                                    
TestPause/serial/Start (107.89s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-448284 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-448284 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m47.893969786s)
--- PASS: TestPause/serial/Start (107.89s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (153.20s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3132434861 start -p stopped-upgrade-127730 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3132434861 start -p stopped-upgrade-127730 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m44.741149472s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3132434861 -p stopped-upgrade-127730 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3132434861 -p stopped-upgrade-127730 stop: (1.494520177s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-127730 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-127730 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (46.96781289s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (153.20s)

TestNetworkPlugins/group/false (3.28s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-428422 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-428422 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (100.95829ms)

-- stdout --
	* [false-428422] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21655
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21655-365455/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-365455/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
-- /stdout --
** stderr ** 
	I0929 12:13:23.856697  403059 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:13:23.857001  403059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:13:23.857014  403059 out.go:374] Setting ErrFile to fd 2...
	I0929 12:13:23.857021  403059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:13:23.857197  403059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-365455/.minikube/bin
	I0929 12:13:23.857720  403059 out.go:368] Setting JSON to false
	I0929 12:13:23.858749  403059 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6946,"bootTime":1759141058,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:13:23.858848  403059 start.go:140] virtualization: kvm guest
	I0929 12:13:23.860592  403059 out.go:179] * [false-428422] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 12:13:23.861880  403059 out.go:179]   - MINIKUBE_LOCATION=21655
	I0929 12:13:23.861895  403059 notify.go:220] Checking for updates...
	I0929 12:13:23.864230  403059 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:13:23.865321  403059 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21655-365455/kubeconfig
	I0929 12:13:23.866519  403059 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-365455/.minikube
	I0929 12:13:23.867719  403059 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 12:13:23.868742  403059 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:13:23.870382  403059 config.go:182] Loaded profile config "offline-crio-130477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:13:23.870498  403059 config.go:182] Loaded profile config "pause-448284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:13:23.870598  403059 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:13:23.903661  403059 out.go:179] * Using the kvm2 driver based on user configuration
	I0929 12:13:23.904739  403059 start.go:304] selected driver: kvm2
	I0929 12:13:23.904753  403059 start.go:924] validating driver "kvm2" against <nil>
	I0929 12:13:23.904765  403059 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:13:23.906580  403059 out.go:203] 
	W0929 12:13:23.907703  403059 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0929 12:13:23.908734  403059 out.go:203] 

** /stderr **
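Note: this is the expected MK_USAGE validation failure, since crio refuses to run without a CNI. A minimal invocation that passes the same check, with "bridge" as an illustrative CNI choice:

    $ minikube start -p false-428422 --container-runtime=crio --cni=bridge --driver=kvm2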
net_test.go:88: 
----------------------- debugLogs start: false-428422 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-428422

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-428422

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-428422

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-428422

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-428422

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-428422

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-428422

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-428422

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-428422

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-428422

>>> host: /etc/nsswitch.conf:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: /etc/hosts:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: /etc/resolv.conf:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-428422

>>> host: crictl pods:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: crictl containers:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> k8s: describe netcat deployment:
error: context "false-428422" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-428422" does not exist

>>> k8s: netcat logs:
error: context "false-428422" does not exist

>>> k8s: describe coredns deployment:
error: context "false-428422" does not exist

>>> k8s: describe coredns pods:
error: context "false-428422" does not exist

>>> k8s: coredns logs:
error: context "false-428422" does not exist

>>> k8s: describe api server pod(s):
error: context "false-428422" does not exist

>>> k8s: api server logs:
error: context "false-428422" does not exist

>>> host: /etc/cni:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: ip a s:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: ip r s:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: iptables-save:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: iptables table nat:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> k8s: describe kube-proxy daemon set:
error: context "false-428422" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-428422" does not exist

>>> k8s: kube-proxy logs:
error: context "false-428422" does not exist

>>> host: kubelet daemon status:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: kubelet daemon config:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> k8s: kubelet logs:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-428422

>>> host: docker daemon status:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: docker daemon config:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: /etc/docker/daemon.json:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: docker system info:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: cri-docker daemon status:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: cri-docker daemon config:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: cri-dockerd version:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: containerd daemon status:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: containerd daemon config:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: /etc/containerd/config.toml:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: containerd config dump:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: crio daemon status:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: crio daemon config:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: /etc/crio:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

>>> host: crio config:
* Profile "false-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428422"

----------------------- debugLogs end: false-428422 [took: 3.023976698s] --------------------------------
helpers_test.go:175: Cleaning up "false-428422" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-428422
--- PASS: TestNetworkPlugins/group/false (3.28s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-657893 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-657893 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (78.255874ms)

-- stdout --
	* [NoKubernetes-657893] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21655
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21655-365455/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-365455/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
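Note: a sketch of the two valid alternatives the error message above points at:

    # either drop the version flag when disabling Kubernetes...
    $ minikube start -p NoKubernetes-657893 --no-kubernetes --driver=kvm2 --container-runtime=crio
    # ...or clear a globally configured version first
    $ minikube config unset kubernetes-version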
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (76.9s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-657893 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0929 12:14:15.918875  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:14:29.350314  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-657893 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m16.542786573s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-657893 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (76.90s)

TestNoKubernetes/serial/StartWithStopK8s (33.5s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-657893 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-657893 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (32.391966782s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-657893 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-657893 status -o json: exit status 2 (239.480931ms)

-- stdout --
	{"Name":"NoKubernetes-657893","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-657893
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (33.50s)

TestNoKubernetes/serial/Start (39.18s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-657893 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-657893 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.18396489s)
--- PASS: TestNoKubernetes/serial/Start (39.18s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-127730
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-657893 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-657893 "sudo systemctl is-active --quiet service kubelet": exit status 1 (218.513036ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
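Note: systemctl is-active exits non-zero when the queried units are not active, so the non-zero ssh exit above is the passing outcome here. The same probe run by hand:

    $ minikube ssh -p NoKubernetes-657893 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet is not running"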
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

TestNoKubernetes/serial/ProfileList (9.14s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (5.544625399s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.592598341s)
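Note: the JSON listing is the machine-readable counterpart of the table output timed above. A sketch of extracting profile names from it, assuming jq is available and relying on the top-level "valid" array minikube emits:

    $ minikube profile list --output=json | jq -r '.valid[].Name'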
--- PASS: TestNoKubernetes/serial/ProfileList (9.14s)

TestNoKubernetes/serial/Stop (1.31s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-657893
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-657893: (1.312838735s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

TestNoKubernetes/serial/StartNoArgs (41.43s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-657893 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-657893 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.434804075s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (41.43s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-657893 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-657893 "sudo systemctl is-active --quiet service kubelet": exit status 1 (214.021503ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

TestStartStop/group/old-k8s-version/serial/FirstStart (102.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-832485 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-832485 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m42.159937671s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (102.16s)

TestStartStop/group/embed-certs/serial/FirstStart (86.92s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-046125 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
E0929 12:19:15.918672  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-046125 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m26.916763284s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.92s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-832485 create -f testdata/busybox.yaml
E0929 12:19:46.269028  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [48285580-b36d-4148-9c00-2bfdabbed1b5] Pending
helpers_test.go:352: "busybox" [48285580-b36d-4148-9c00-2bfdabbed1b5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [48285580-b36d-4148-9c00-2bfdabbed1b5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004331646s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-832485 exec busybox -- /bin/sh -c "ulimit -n"
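Note: the helper poll above is roughly equivalent to a plain kubectl label-selector wait with the same 8m budget:

    $ kubectl --context old-k8s-version-832485 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m0s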
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.32s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-832485 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-832485 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.057873054s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-832485 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/old-k8s-version/serial/Stop (74.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-832485 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-832485 --alsologtostderr -v=3: (1m14.928421852s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (74.93s)

TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-046125 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [69b4896d-d584-4075-8f02-ad98a90b1a11] Pending
helpers_test.go:352: "busybox" [69b4896d-d584-4075-8f02-ad98a90b1a11] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [69b4896d-d584-4075-8f02-ad98a90b1a11] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.005351883s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-046125 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-046125 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-046125 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

TestStartStop/group/embed-certs/serial/Stop (80.85s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-046125 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-046125 --alsologtostderr -v=3: (1m20.851446791s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (80.85s)

TestStartStop/group/no-preload/serial/FirstStart (97.93s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-790496 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-790496 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m37.930067541s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (97.93s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-832485 -n old-k8s-version-832485
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-832485 -n old-k8s-version-832485: exit status 7 (83.632203ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-832485 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/old-k8s-version/serial/SecondStart (62.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-832485 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-832485 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m1.725532366s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-832485 -n old-k8s-version-832485
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (62.18s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-046125 -n embed-certs-046125
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-046125 -n embed-certs-046125: exit status 7 (93.303376ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-046125 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (54.76s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-046125 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-046125 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (54.36760819s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-046125 -n embed-certs-046125
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (54.76s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (17.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-nt94z" [2f9e5a9a-0782-4f19-9102-c0b7a31bbb7e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-nt94z" [2f9e5a9a-0782-4f19-9102-c0b7a31bbb7e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.005991895s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (17.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-nt94z" [2f9e5a9a-0782-4f19-9102-c0b7a31bbb7e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003809008s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-832485 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-884616 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-884616 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m25.288492295s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.29s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-832485 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (3.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-832485 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-832485 -n old-k8s-version-832485
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-832485 -n old-k8s-version-832485: exit status 2 (304.735349ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-832485 -n old-k8s-version-832485
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-832485 -n old-k8s-version-832485: exit status 2 (306.624136ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-832485 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-832485 -n old-k8s-version-832485
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-832485 -n old-k8s-version-832485
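Note: the --format arguments above are Go templates evaluated against minikube's status struct, and status exits 2 while a component reports Paused or Stopped, as the two non-zero exits earlier in this test show. Both fields can be read in one call:

    $ minikube status -p old-k8s-version-832485 --format='{{.APIServer}} {{.Kubelet}}'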
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.39s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kkq5s" [df6d37b0-9eb9-4900-8272-a87a8d307fb8] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kkq5s" [df6d37b0-9eb9-4900-8272-a87a8d307fb8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.003575989s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.00s)

TestStartStop/group/newest-cni/serial/FirstStart (62.08s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-815943 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-815943 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m2.083124275s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (62.08s)

TestStartStop/group/no-preload/serial/DeployApp (10.37s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-790496 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f9aa5017-0b5c-4e8c-95b8-1fac3864cbfa] Pending
helpers_test.go:352: "busybox" [f9aa5017-0b5c-4e8c-95b8-1fac3864cbfa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f9aa5017-0b5c-4e8c-95b8-1fac3864cbfa] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005559887s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-790496 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.37s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kkq5s" [df6d37b0-9eb9-4900-8272-a87a8d307fb8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004534064s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-046125 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-046125 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3.37s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-046125 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-046125 -n embed-certs-046125
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-046125 -n embed-certs-046125: exit status 2 (286.044386ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-046125 -n embed-certs-046125
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-046125 -n embed-certs-046125: exit status 2 (289.607134ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-046125 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-046125 --alsologtostderr -v=1: (1.015433316s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-046125 -n embed-certs-046125
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-046125 -n embed-certs-046125
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.37s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.52s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-790496 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-790496 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.338728566s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-790496 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.52s)

TestStartStop/group/no-preload/serial/Stop (86.54s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-790496 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-790496 --alsologtostderr -v=3: (1m26.535411896s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (86.54s)

TestNetworkPlugins/group/auto/Start (103.79s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-428422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-428422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m43.792406816s)
--- PASS: TestNetworkPlugins/group/auto/Start (103.79s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.99s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-815943 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-815943 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.990472864s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.99s)

TestStartStop/group/newest-cni/serial/Stop (8.08s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-815943 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-815943 --alsologtostderr -v=3: (8.083957549s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.08s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-815943 -n newest-cni-815943
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-815943 -n newest-cni-815943: exit status 7 (80.423627ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-815943 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (34.29s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-815943 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
E0929 12:23:58.990472  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-815943 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (34.00017627s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-815943 -n newest-cni-815943
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (34.29s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-884616 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2edc515d-dcd4-48f5-a515-c67180c5224b] Pending
helpers_test.go:352: "busybox" [2edc515d-dcd4-48f5-a515-c67180c5224b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2edc515d-dcd4-48f5-a515-c67180c5224b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004499045s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-884616 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.28s)
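
Each DeployApp subtest creates a busybox pod from testdata/busybox.yaml, waits for it to run, then execs `ulimit -n` in it as a quick sanity check of the container's open-file limit. Reproducing that final check outside the harness takes one kubectl call; a sketch in Go around the same invocation, with the context name taken from the run above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same exec the test performs once the busybox pod is Running.
	out, err := exec.Command("kubectl",
		"--context", "default-k8s-diff-port-884616",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("open-file limit:", strings.TrimSpace(string(out)))
}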

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-884616 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-884616 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (81.68s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-884616 --alsologtostderr -v=3
E0929 12:24:15.918231  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/functional-668607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-884616 --alsologtostderr -v=3: (1m21.683113239s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (81.68s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-790496 -n no-preload-790496
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-790496 -n no-preload-790496: exit status 7 (80.713775ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-790496 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (60.68s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-790496 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-790496 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m0.379586615s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-790496 -n no-preload-790496
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (60.68s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-815943 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (2.64s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-815943 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-815943 -n newest-cni-815943
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-815943 -n newest-cni-815943: exit status 2 (275.888202ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-815943 -n newest-cni-815943
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-815943 -n newest-cni-815943: exit status 2 (269.219072ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-815943 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-815943 -n newest-cni-815943
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-815943 -n newest-cni-815943
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.64s)

TestNetworkPlugins/group/kindnet/Start (105.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-428422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-428422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m45.11583951s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (105.12s)

TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-428422 "pgrep -a kubelet"
I0929 12:24:45.286032  369423 config.go:182] Loaded profile config "auto-428422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

TestNetworkPlugins/group/auto/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-428422 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fclsz" [8cbae23f-e2f1-4267-8bc5-e1985a1dabec] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0929 12:24:46.268742  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/addons-965504/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:46.293348  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:46.299832  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:46.311290  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:46.332804  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:46.374987  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:46.456708  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:46.618744  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:46.941018  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:47.582378  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:48.865173  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-fclsz" [8cbae23f-e2f1-4267-8bc5-e1985a1dabec] Running
E0929 12:24:51.426903  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005506718s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.23s)

TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-428422 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-428422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-428422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
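
The DNS/Localhost/HairPin trio above repeats for every CNI below: DNS resolves kubernetes.default from inside the netcat pod, Localhost checks the pod can reach its own port 8080 over loopback, and HairPin checks the pod can reach itself through its own Service name (hairpin traffic). A compact sketch of the three probes, using the same kubectl invocations logged above; the context name is the profile from this run:

package main

import (
	"fmt"
	"os/exec"
)

// probe runs one shell check inside the netcat deployment, mirroring the
// kubectl exec lines logged above.
func probe(ctxName, script string) error {
	out, err := exec.Command("kubectl", "--context", ctxName,
		"exec", "deployment/netcat", "--", "/bin/sh", "-c", script).CombinedOutput()
	fmt.Printf("%s => %s", script, out)
	return err
}

func main() {
	ctxName := "auto-428422"
	for _, script := range []string{
		"nslookup kubernetes.default",    // DNS: resolve a cluster service
		"nc -w 5 -i 5 -z localhost 8080", // Localhost: loopback to own server
		"nc -w 5 -i 5 -z netcat 8080",    // HairPin: reach self via own Service
	} {
		if err := probe(ctxName, script); err != nil {
			panic(err)
		}
	}
}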

TestNetworkPlugins/group/calico/Start (75.38s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-428422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0929 12:25:27.272082  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-428422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m15.37847527s)
--- PASS: TestNetworkPlugins/group/calico/Start (75.38s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-p684q" [0a8287eb-2502-45b4-aeba-ef7c5c9fc260] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-p684q" [0a8287eb-2502-45b4-aeba-ef7c5c9fc260] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004078614s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-884616 -n default-k8s-diff-port-884616
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-884616 -n default-k8s-diff-port-884616: exit status 7 (97.526545ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-884616 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-884616 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-884616 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (44.674420974s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-884616 -n default-k8s-diff-port-884616
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.12s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-p684q" [0a8287eb-2502-45b4-aeba-ef7c5c9fc260] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004441475s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-790496 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-790496 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/no-preload/serial/Pause (3.41s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-790496 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-790496 --alsologtostderr -v=1: (1.089261294s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-790496 -n no-preload-790496
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-790496 -n no-preload-790496: exit status 2 (317.223687ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-790496 -n no-preload-790496
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-790496 -n no-preload-790496: exit status 2 (308.42354ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-790496 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-790496 --alsologtostderr -v=1: (1.024909789s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-790496 -n no-preload-790496
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-790496 -n no-preload-790496
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.41s)

TestNetworkPlugins/group/custom-flannel/Start (81.62s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-428422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0929 12:26:08.234270  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/old-k8s-version-832485/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-428422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m21.624166116s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (81.62s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-h29v5" [27f7ece9-e272-475b-95e4-e6ce0ffd5927] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004836856s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (19.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-l77qv" [7fefe361-b9cf-4401-8f42-2626c991aa57] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-l77qv" [7fefe361-b9cf-4401-8f42-2626c991aa57] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 19.00539023s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (19.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-428422 "pgrep -a kubelet"
I0929 12:26:26.156780  369423 config.go:182] Loaded profile config "kindnet-428422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-428422 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vpsmz" [26bff6af-4296-4b0c-8080-df198a59dfcf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vpsmz" [26bff6af-4296-4b0c-8080-df198a59dfcf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004221998s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.30s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-dq8jg" [a5da5cdc-8ac7-4f3c-9e27-a5287d7babc6] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-dq8jg" [a5da5cdc-8ac7-4f3c-9e27-a5287d7babc6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005185183s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-428422 "pgrep -a kubelet"
I0929 12:26:34.262439  369423 config.go:182] Loaded profile config "calico-428422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

TestNetworkPlugins/group/calico/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-428422 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lhhlm" [16e767a9-5673-4887-ac58-50397223ac42] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lhhlm" [16e767a9-5673-4887-ac58-50397223ac42] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005803783s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.27s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-428422 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-428422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-428422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-l77qv" [7fefe361-b9cf-4401-8f42-2626c991aa57] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005865798s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-884616 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-428422 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-428422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-884616 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

TestNetworkPlugins/group/calico/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-428422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-884616 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-884616 --alsologtostderr -v=1: (1.133021752s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-884616 -n default-k8s-diff-port-884616
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-884616 -n default-k8s-diff-port-884616: exit status 2 (304.743867ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-884616 -n default-k8s-diff-port-884616
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-884616 -n default-k8s-diff-port-884616: exit status 2 (285.687938ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-884616 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-884616 -n default-k8s-diff-port-884616
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-884616 -n default-k8s-diff-port-884616
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.34s)
E0929 12:27:49.532562  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/no-preload-790496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:27:49.539034  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/no-preload-790496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:27:49.550533  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/no-preload-790496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:27:49.572076  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/no-preload-790496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:27:49.613635  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/no-preload-790496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:27:49.695278  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/no-preload-790496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:27:49.856923  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/no-preload-790496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:27:50.179047  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/no-preload-790496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:27:50.821178  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/no-preload-790496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:27:52.103064  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/no-preload-790496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:27:54.665476  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/no-preload-790496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:27:59.787270  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/no-preload-790496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestNetworkPlugins/group/flannel/Start (79.51s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-428422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-428422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m19.512002305s)
--- PASS: TestNetworkPlugins/group/flannel/Start (79.51s)

TestNetworkPlugins/group/bridge/Start (100.86s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-428422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-428422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m40.864770975s)
--- PASS: TestNetworkPlugins/group/bridge/Start (100.86s)

TestNetworkPlugins/group/enable-default-cni/Start (113.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-428422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-428422 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m53.109814968s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (113.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-428422 "pgrep -a kubelet"
I0929 12:27:12.644557  369423 config.go:182] Loaded profile config "custom-flannel-428422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-428422 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-g9fdr" [8e6e527d-846f-486f-8442-80af1e913c73] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-g9fdr" [8e6e527d-846f-486f-8442-80af1e913c73] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005254708s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.26s)
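
The NetCatPod steps deploy testdata/netcat-deployment.yaml and then poll until a pod labelled app=netcat reports Running. A rough client-go sketch of such a wait loop (the report's own helper lives in helpers_test.go; the code below is illustrative, not that helper):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config; during a test run this points at the profile's context.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(15 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("default").List(context.Background(),
			metav1.ListOptions{LabelSelector: "app=netcat"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Printf("pod %s is running\n", p.Name)
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for an app=netcat pod to run")
}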

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-428422 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)
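
The DNS step execs "nslookup kubernetes.default" inside the netcat pod, which succeeds only if the pod's resolver reaches the cluster DNS service. The same probe expressed in Go (illustrative; it must run inside a pod, where /etc/resolv.conf points at cluster DNS):

package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Resolves via the cluster DNS search path, mirroring the nslookup check.
	addrs, err := net.LookupHost("kubernetes.default")
	if err != nil {
		log.Fatalf("cluster DNS lookup failed: %v", err)
	}
	fmt.Println("kubernetes.default resolves to", addrs)
}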

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-428422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-428422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)
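
The HairPin step has the pod dial its own Service name ("nc -w 5 -i 5 -z netcat 8080"), which exercises hairpin NAT: traffic leaves the pod for the service VIP and is routed straight back to the same pod. An equivalent Go probe (illustrative; assumes it runs inside the netcat pod, where "netcat" resolves via cluster DNS):

package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	// Matches the nc timeout of 5 seconds; only checks TCP reachability.
	conn, err := net.DialTimeout("tcp", "netcat:8080", 5*time.Second)
	if err != nil {
		log.Fatalf("hairpin connect failed: %v", err)
	}
	conn.Close()
	fmt.Println("hairpin connect succeeded")
}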

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-pbmpp" [ae41da56-e57b-4673-a70c-d0a410083adf] Running
E0929 12:28:10.028948  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/no-preload-790496/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004228484s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-428422 "pgrep -a kubelet"
I0929 12:28:15.483199  369423 config.go:182] Loaded profile config "flannel-428422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-428422 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mvk5w" [6fb89b5b-2b27-417b-a55a-57d989b44c1c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mvk5w" [6fb89b5b-2b27-417b-a55a-57d989b44c1c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004569486s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-428422 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-428422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-428422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-428422 "pgrep -a kubelet"
I0929 12:28:35.096407  369423 config.go:182] Loaded profile config "bridge-428422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-428422 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dkrl7" [3dbeae1e-33ee-450a-b05b-107e5deba299] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dkrl7" [3dbeae1e-33ee-450a-b05b-107e5deba299] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004071625s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-428422 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-428422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-428422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-428422 "pgrep -a kubelet"
I0929 12:28:56.005117  369423 config.go:182] Loaded profile config "enable-default-cni-428422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-428422 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qmqb9" [619c3915-1d02-4ef9-8acb-38281fa8f650] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qmqb9" [619c3915-1d02-4ef9-8acb-38281fa8f650] Running
E0929 12:29:01.351444  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/default-k8s-diff-port-884616/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:29:01.357907  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/default-k8s-diff-port-884616/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:29:01.369281  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/default-k8s-diff-port-884616/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:29:01.390869  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/default-k8s-diff-port-884616/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:29:01.432149  369423 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-365455/.minikube/profiles/default-k8s-diff-port-884616/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003560575s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-428422 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-428422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-428422 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    

Test skip (40/324)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.0/cached-images 0
15 TestDownloadOnly/v1.34.0/binaries 0
16 TestDownloadOnly/v1.34.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.3
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
144 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.02
145 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
146 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
147 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
148 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
149 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
150 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
151 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
259 TestNetworkPlugins/group/kubenet 3.21
263 TestStartStop/group/disable-driver-mounts 0.15
275 TestNetworkPlugins/group/cilium 3.65

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.3s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-965504 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.21s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-428422 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-428422

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-428422

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-428422

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-428422

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-428422

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-428422

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-428422

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-428422

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-428422

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-428422

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: /etc/hosts:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: /etc/resolv.conf:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-428422

>>> host: crictl pods:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: crictl containers:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> k8s: describe netcat deployment:
error: context "kubenet-428422" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-428422" does not exist

>>> k8s: netcat logs:
error: context "kubenet-428422" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-428422" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-428422" does not exist

>>> k8s: coredns logs:
error: context "kubenet-428422" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-428422" does not exist

>>> k8s: api server logs:
error: context "kubenet-428422" does not exist

>>> host: /etc/cni:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: ip a s:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: ip r s:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: iptables-save:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: iptables table nat:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-428422" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-428422" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-428422" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: kubelet daemon config:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> k8s: kubelet logs:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-428422

>>> host: docker daemon status:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: docker daemon config:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: docker system info:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: cri-docker daemon status:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: cri-docker daemon config:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: cri-dockerd version:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: containerd daemon status:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: containerd daemon config:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: containerd config dump:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: crio daemon status:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: crio daemon config:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: /etc/crio:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"

>>> host: crio config:
* Profile "kubenet-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428422"
----------------------- debugLogs end: kubenet-428422 [took: 3.050697721s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-428422" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-428422
--- SKIP: TestNetworkPlugins/group/kubenet (3.21s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-513981" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-513981
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.65s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-428422 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-428422

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-428422

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-428422

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-428422

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-428422

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-428422

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-428422

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-428422

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-428422

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-428422

>>> host: /etc/nsswitch.conf:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: /etc/hosts:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: /etc/resolv.conf:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-428422

>>> host: crictl pods:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: crictl containers:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> k8s: describe netcat deployment:
error: context "cilium-428422" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-428422" does not exist

>>> k8s: netcat logs:
error: context "cilium-428422" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-428422" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-428422" does not exist

>>> k8s: coredns logs:
error: context "cilium-428422" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-428422" does not exist

>>> k8s: api server logs:
error: context "cilium-428422" does not exist

>>> host: /etc/cni:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: ip a s:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: ip r s:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: iptables-save:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: iptables table nat:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-428422

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-428422

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-428422" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-428422" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-428422

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-428422

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-428422" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-428422" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-428422" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-428422" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-428422" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: kubelet daemon config:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> k8s: kubelet logs:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null
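
The empty kubeconfig above (clusters: null, contexts: null) is the root of every failure in this dump: the cilium test was skipped before any cluster was created, so no "cilium-428422" context was ever written. A minimal client-go sketch of the lookup kubectl performs, assuming the k8s.io/client-go module is available (error wording can differ slightly by version):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Resolve a named context the same way `kubectl --context` does.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "cilium-428422"}
	cfg := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides)
	if _, err := cfg.ClientConfig(); err != nil {
		// With no contexts defined, this fails with a "context ... does
		// not exist" / "context was not found" error, matching the
		// kubectl lines in this dump.
		fmt.Println(err)
	}
}
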
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-428422

>>> host: docker daemon status:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: docker daemon config:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: docker system info:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: cri-docker daemon status:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: cri-docker daemon config:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: cri-dockerd version:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: containerd daemon status:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: containerd daemon config:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: containerd config dump:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: crio daemon status:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: crio daemon config:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: /etc/crio:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

>>> host: crio config:
* Profile "cilium-428422" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428422"

----------------------- debugLogs end: cilium-428422 [took: 3.476732381s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-428422" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-428422
--- SKIP: TestNetworkPlugins/group/cilium (3.65s)
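
For reference, the unconditional skip at net_test.go:102 reduces to a plain t.Skip. This is a sketch, not the minikube source; only the message is taken from the log above. It also accounts for the two error shapes in the dump: kubectl probes report a missing context, while minikube host probes report a missing profile, because neither artifact was ever created:

package teststub

import "testing"

// Sketch of the cilium skip: the test exits before a profile or kubeconfig
// context exists, so the later debug probes can only fail with
// "context was not found" (kubectl) or a missing-profile message (minikube).
func TestNetworkPluginsCilium(t *testing.T) {
	t.Skip("Skipping the test as it's interfering with other tests and is outdated")
}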