Test Report: KVM_Linux_crio 21683

                    
ec1ad263eb9d75fb579dc5b6c2680f618af3e384:2025-10-09:41836

Failed tests (3/324)

Order | Failed test | Duration (s)
37 | TestAddons/parallel/Ingress | 160.39
244 | TestPreload | 164.87
289 | TestPause/serial/SecondStartNoReconfiguration | 81.46
TestAddons/parallel/Ingress (160.39s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-916037 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-916037 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-916037 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [7461112d-e3eb-4015-adf9-246e185bff35] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [7461112d-e3eb-4015-adf9-246e185bff35] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.00515406s
I1009 18:43:40.463668  140358 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-916037 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-916037 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.31182171s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-916037 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-916037 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.158
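
The step that actually failed above is the in-VM curl probe: ssh exit status 28 is curl's exit code for a timed-out transfer (minikube ssh propagates the remote command's status), so the ingress controller never answered on 127.0.0.1:80 inside the VM during the 2m14s retry window. As a rough way to re-run the same check from the host, here is a minimal Go sketch; it assumes the node IP 192.168.39.158 reported by `minikube ip` above, assumes ingress-nginx is reachable on port 80 of that node, and sets the Host header that the testdata/nginx-ingress-v1.yaml rule routes on.

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Assumed values taken from the log above: node IP from `minikube ip`,
	// host name from the ingress rule in testdata/nginx-ingress-v1.yaml.
	const nodeIP = "192.168.39.158"
	const hostHeader = "nginx.example.com"

	// Bound the request so a hung ingress shows up as a timeout, mirroring curl's behaviour.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, "http://"+nodeIP+"/", nil)
	if err != nil {
		panic(err)
	}
	// Setting req.Host overrides the Host header, which is what the ingress rule matches on.
	req.Host = hostHeader

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		// A timeout here corresponds to the curl exit status 28 seen in the test.
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("ingress answered with:", resp.Status)
}

If this request also times out, the failure points at the ingress-nginx controller (or the service in front of it) rather than at the test harness.
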
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-916037 -n addons-916037
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-916037 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-916037 logs -n 25: (1.419079931s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ delete  │ -p download-only-625858 │ download-only-625858 │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │ 09 Oct 25 18:39 UTC │
	│ start   │ --download-only -p binary-mirror-222043 --alsologtostderr --binary-mirror http://127.0.0.1:39753 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ binary-mirror-222043 │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │ │
	│ delete  │ -p binary-mirror-222043 │ binary-mirror-222043 │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │ 09 Oct 25 18:39 UTC │
	│ addons  │ disable dashboard -p addons-916037 │ addons-916037 │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │ │
	│ addons  │ enable dashboard -p addons-916037 │ addons-916037 │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │ │
	│ start   │ -p addons-916037 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-916037 │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │ 09 Oct 25 18:42 UTC │
	│ addons  │ addons-916037 addons disable volcano --alsologtostderr -v=1 │ addons-916037 │ jenkins │ v1.37.0 │ 09 Oct 25 18:42 UTC │ 09 Oct 25 18:42 UTC │
	│ addons  │ addons-916037 addons disable gcp-auth --alsologtostderr -v=1 │ addons-916037 │ jenkins │ v1.37.0 │ 09 Oct 25 18:43 UTC │ 09 Oct 25 18:43 UTC │
	│ addons  │ enable headlamp -p addons-916037 --alsologtostderr -v=1 │ addons-916037 │ jenkins │ v1.37.0 │ 09 Oct 25 18:43 UTC │ 09 Oct 25 18:43 UTC │
	│ addons  │ addons-916037 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-916037 │ jenkins │ v1.37.0 │ 09 Oct 25 18:43 UTC │ 09 Oct 25 18:43 UTC │
	│ addons  │ addons-916037 addons disable metrics-server --alsologtostderr -v=1 │ addons-916037 │ jenkins │ v1.37.0 │ 09 Oct 25 18:43 UTC │ 09 Oct 25 18:43 UTC │
	│ addons  │ addons-916037 addons disable headlamp --alsologtostderr -v=1 │ addons-916037 │ jenkins │ v1.37.0 │ 09 Oct 25 18:43 UTC │ 09 Oct 25 18:43 UTC │
	│ ip      │ addons-916037 ip │ addons-916037 │ jenkins │ v1.37.0 │ 09 Oct 25 18:43 UTC │ 09 Oct 25 18:43 UTC │
	│ addons  │ addons-916037 addons disable registry --alsologtostderr -v=1 │ addons-916037 │ jenkins │ v1.37.0 │ 09 Oct 25 18:43 UTC │ 09 Oct 25 18:43 UTC │
	│ ssh     │ addons-916037 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-916037 │ jenkins │ v1.37.0 │ 09 Oct 25 18:43 UTC │ │
	│ addons  │ addons-916037 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-916037 │ jenkins │ v1.37.0 │ 09 Oct 25 18:43 UTC │ 09 Oct 25 18:43 UTC │
	│ addons  │ addons-916037 addons disable yakd --alsologtostderr -v=1 │ addons-916037 │ jenkins │ v1.37.0 │ 09 Oct 25 18:43 UTC │ 09 Oct 25 18:43 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-916037 │ addons-916037 │ jenkins │ v1.37.0 │ 09 Oct 25 18:43 UTC │ 09 Oct 25 18:43 UTC │
	│ addons  │ addons-916037 addons disable registry-creds --alsologtostderr -v=1 │ addons-916037 │ jenkins │ v1.37.0 │ 09 Oct 25 18:43 UTC │ 09 Oct 25 18:43 UTC │
	│ ssh     │ addons-916037 ssh cat /opt/local-path-provisioner/pvc-cf70288e-af26-477d-beee-bb5695fd7609_default_test-pvc/file1 │ addons-916037 │ jenkins │ v1.37.0 │ 09 Oct 25 18:43 UTC │ 09 Oct 25 18:43 UTC │
	│ addons  │ addons-916037 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-916037 │ jenkins │ v1.37.0 │ 09 Oct 25 18:43 UTC │ 09 Oct 25 18:44 UTC │
	│ addons  │ addons-916037 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-916037 │ jenkins │ v1.37.0 │ 09 Oct 25 18:43 UTC │ 09 Oct 25 18:43 UTC │
	│ addons  │ addons-916037 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-916037 │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │ 09 Oct 25 18:44 UTC │
	│ addons  │ addons-916037 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-916037 │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │ 09 Oct 25 18:44 UTC │
	│ ip      │ addons-916037 ip │ addons-916037 │ jenkins │ v1.37.0 │ 09 Oct 25 18:45 UTC │ 09 Oct 25 18:45 UTC │
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:39:31
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:39:31.201892  141076 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:39:31.202192  141076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:39:31.202218  141076 out.go:374] Setting ErrFile to fd 2...
	I1009 18:39:31.202225  141076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:39:31.202434  141076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-136449/.minikube/bin
	I1009 18:39:31.203068  141076 out.go:368] Setting JSON to false
	I1009 18:39:31.204028  141076 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4911,"bootTime":1760030260,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:39:31.204118  141076 start.go:143] virtualization: kvm guest
	I1009 18:39:31.205615  141076 out.go:179] * [addons-916037] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:39:31.206720  141076 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 18:39:31.206725  141076 notify.go:221] Checking for updates...
	I1009 18:39:31.208613  141076 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:39:31.209803  141076 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-136449/kubeconfig
	I1009 18:39:31.210859  141076 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-136449/.minikube
	I1009 18:39:31.211927  141076 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:39:31.212977  141076 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:39:31.214105  141076 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 18:39:31.245040  141076 out.go:179] * Using the kvm2 driver based on user configuration
	I1009 18:39:31.246009  141076 start.go:309] selected driver: kvm2
	I1009 18:39:31.246023  141076 start.go:930] validating driver "kvm2" against <nil>
	I1009 18:39:31.246034  141076 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:39:31.246711  141076 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:39:31.246788  141076 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-136449/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 18:39:31.260599  141076 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 18:39:31.260634  141076 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-136449/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 18:39:31.274315  141076 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 18:39:31.274360  141076 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 18:39:31.274639  141076 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:39:31.274673  141076 cni.go:84] Creating CNI manager for ""
	I1009 18:39:31.274717  141076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 18:39:31.274725  141076 start_flags.go:337] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 18:39:31.274770  141076 start.go:353] cluster config:
	{Name:addons-916037 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-916037 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:39:31.274867  141076 iso.go:125] acquiring lock: {Name:mk98a4af23a55ce5e8a323d2964def6dd3fc61ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:39:31.276366  141076 out.go:179] * Starting "addons-916037" primary control-plane node in "addons-916037" cluster
	I1009 18:39:31.277228  141076 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:39:31.277272  141076 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-136449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:39:31.277283  141076 cache.go:58] Caching tarball of preloaded images
	I1009 18:39:31.277402  141076 preload.go:233] Found /home/jenkins/minikube-integration/21683-136449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:39:31.277415  141076 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:39:31.277758  141076 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/config.json ...
	I1009 18:39:31.277782  141076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/config.json: {Name:mk0b22309cca3a09550681aaaa4aec68194fd7c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:39:31.277923  141076 start.go:361] acquireMachinesLock for addons-916037: {Name:mkb52a311831bedb463a7965f6666d89b7fa391a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 18:39:31.277976  141076 start.go:365] duration metric: took 40.353µs to acquireMachinesLock for "addons-916037"
	I1009 18:39:31.278000  141076 start.go:94] Provisioning new machine with config: &{Name:addons-916037 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-916037 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:39:31.278048  141076 start.go:126] createHost starting for "" (driver="kvm2")
	I1009 18:39:31.279333  141076 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1009 18:39:31.279443  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:39:31.279479  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:39:31.292549  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35997
	I1009 18:39:31.293017  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:39:31.293616  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:39:31.293649  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:39:31.293999  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:39:31.294193  141076 main.go:141] libmachine: (addons-916037) Calling .GetMachineName
	I1009 18:39:31.294351  141076 main.go:141] libmachine: (addons-916037) Calling .DriverName
	I1009 18:39:31.294483  141076 start.go:160] libmachine.API.Create for "addons-916037" (driver="kvm2")
	I1009 18:39:31.294515  141076 client.go:168] LocalClient.Create starting
	I1009 18:39:31.294569  141076 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem
	I1009 18:39:31.719017  141076 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/cert.pem
	I1009 18:39:32.164911  141076 main.go:141] libmachine: Running pre-create checks...
	I1009 18:39:32.164936  141076 main.go:141] libmachine: (addons-916037) Calling .PreCreateCheck
	I1009 18:39:32.165455  141076 main.go:141] libmachine: (addons-916037) Calling .GetConfigRaw
	I1009 18:39:32.165930  141076 main.go:141] libmachine: Creating machine...
	I1009 18:39:32.165945  141076 main.go:141] libmachine: (addons-916037) Calling .Create
	I1009 18:39:32.166126  141076 main.go:141] libmachine: (addons-916037) creating domain...
	I1009 18:39:32.166143  141076 main.go:141] libmachine: (addons-916037) creating network...
	I1009 18:39:32.167646  141076 main.go:141] libmachine: (addons-916037) DBG | found existing default network
	I1009 18:39:32.167787  141076 main.go:141] libmachine: (addons-916037) DBG | <network>
	I1009 18:39:32.167805  141076 main.go:141] libmachine: (addons-916037) DBG |   <name>default</name>
	I1009 18:39:32.167817  141076 main.go:141] libmachine: (addons-916037) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1009 18:39:32.167834  141076 main.go:141] libmachine: (addons-916037) DBG |   <forward mode='nat'>
	I1009 18:39:32.167843  141076 main.go:141] libmachine: (addons-916037) DBG |     <nat>
	I1009 18:39:32.167854  141076 main.go:141] libmachine: (addons-916037) DBG |       <port start='1024' end='65535'/>
	I1009 18:39:32.167863  141076 main.go:141] libmachine: (addons-916037) DBG |     </nat>
	I1009 18:39:32.167873  141076 main.go:141] libmachine: (addons-916037) DBG |   </forward>
	I1009 18:39:32.167883  141076 main.go:141] libmachine: (addons-916037) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1009 18:39:32.167894  141076 main.go:141] libmachine: (addons-916037) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1009 18:39:32.167904  141076 main.go:141] libmachine: (addons-916037) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1009 18:39:32.167922  141076 main.go:141] libmachine: (addons-916037) DBG |     <dhcp>
	I1009 18:39:32.167941  141076 main.go:141] libmachine: (addons-916037) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1009 18:39:32.167960  141076 main.go:141] libmachine: (addons-916037) DBG |     </dhcp>
	I1009 18:39:32.167993  141076 main.go:141] libmachine: (addons-916037) DBG |   </ip>
	I1009 18:39:32.168016  141076 main.go:141] libmachine: (addons-916037) DBG | </network>
	I1009 18:39:32.168050  141076 main.go:141] libmachine: (addons-916037) DBG | 
	I1009 18:39:32.168395  141076 main.go:141] libmachine: (addons-916037) DBG | I1009 18:39:32.168230  141104 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123550}
	I1009 18:39:32.168423  141076 main.go:141] libmachine: (addons-916037) DBG | defining private network:
	I1009 18:39:32.168440  141076 main.go:141] libmachine: (addons-916037) DBG | 
	I1009 18:39:32.168450  141076 main.go:141] libmachine: (addons-916037) DBG | <network>
	I1009 18:39:32.168467  141076 main.go:141] libmachine: (addons-916037) DBG |   <name>mk-addons-916037</name>
	I1009 18:39:32.168481  141076 main.go:141] libmachine: (addons-916037) DBG |   <dns enable='no'/>
	I1009 18:39:32.168490  141076 main.go:141] libmachine: (addons-916037) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1009 18:39:32.168499  141076 main.go:141] libmachine: (addons-916037) DBG |     <dhcp>
	I1009 18:39:32.168511  141076 main.go:141] libmachine: (addons-916037) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1009 18:39:32.168522  141076 main.go:141] libmachine: (addons-916037) DBG |     </dhcp>
	I1009 18:39:32.168529  141076 main.go:141] libmachine: (addons-916037) DBG |   </ip>
	I1009 18:39:32.168540  141076 main.go:141] libmachine: (addons-916037) DBG | </network>
	I1009 18:39:32.168552  141076 main.go:141] libmachine: (addons-916037) DBG | 
	I1009 18:39:32.173902  141076 main.go:141] libmachine: (addons-916037) DBG | creating private network mk-addons-916037 192.168.39.0/24...
	I1009 18:39:32.238040  141076 main.go:141] libmachine: (addons-916037) DBG | private network mk-addons-916037 192.168.39.0/24 created
	I1009 18:39:32.238321  141076 main.go:141] libmachine: (addons-916037) DBG | <network>
	I1009 18:39:32.238339  141076 main.go:141] libmachine: (addons-916037) DBG |   <name>mk-addons-916037</name>
	I1009 18:39:32.238350  141076 main.go:141] libmachine: (addons-916037) setting up store path in /home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037 ...
	I1009 18:39:32.238379  141076 main.go:141] libmachine: (addons-916037) building disk image from file:///home/jenkins/minikube-integration/21683-136449/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1009 18:39:32.238402  141076 main.go:141] libmachine: (addons-916037) DBG |   <uuid>cd0f64d0-6c92-4274-820f-8d82279dbf66</uuid>
	I1009 18:39:32.238413  141076 main.go:141] libmachine: (addons-916037) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1009 18:39:32.238421  141076 main.go:141] libmachine: (addons-916037) DBG |   <mac address='52:54:00:b3:f9:dc'/>
	I1009 18:39:32.238430  141076 main.go:141] libmachine: (addons-916037) DBG |   <dns enable='no'/>
	I1009 18:39:32.238438  141076 main.go:141] libmachine: (addons-916037) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1009 18:39:32.238457  141076 main.go:141] libmachine: (addons-916037) Downloading /home/jenkins/minikube-integration/21683-136449/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21683-136449/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1009 18:39:32.238475  141076 main.go:141] libmachine: (addons-916037) DBG |     <dhcp>
	I1009 18:39:32.238485  141076 main.go:141] libmachine: (addons-916037) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1009 18:39:32.238492  141076 main.go:141] libmachine: (addons-916037) DBG |     </dhcp>
	I1009 18:39:32.238498  141076 main.go:141] libmachine: (addons-916037) DBG |   </ip>
	I1009 18:39:32.238505  141076 main.go:141] libmachine: (addons-916037) DBG | </network>
	I1009 18:39:32.238511  141076 main.go:141] libmachine: (addons-916037) DBG | 
	I1009 18:39:32.238538  141076 main.go:141] libmachine: (addons-916037) DBG | I1009 18:39:32.238303  141104 common.go:147] Making disk image using store path: /home/jenkins/minikube-integration/21683-136449/.minikube
	I1009 18:39:32.515348  141076 main.go:141] libmachine: (addons-916037) DBG | I1009 18:39:32.515230  141104 common.go:154] Creating ssh key: /home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/id_rsa...
	I1009 18:39:33.513116  141076 main.go:141] libmachine: (addons-916037) DBG | I1009 18:39:33.512966  141104 common.go:160] Creating raw disk image: /home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/addons-916037.rawdisk...
	I1009 18:39:33.513143  141076 main.go:141] libmachine: (addons-916037) DBG | Writing magic tar header
	I1009 18:39:33.513158  141076 main.go:141] libmachine: (addons-916037) DBG | Writing SSH key tar header
	I1009 18:39:33.513166  141076 main.go:141] libmachine: (addons-916037) DBG | I1009 18:39:33.513125  141104 common.go:174] Fixing permissions on /home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037 ...
	I1009 18:39:33.513313  141076 main.go:141] libmachine: (addons-916037) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037
	I1009 18:39:33.513349  141076 main.go:141] libmachine: (addons-916037) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21683-136449/.minikube/machines
	I1009 18:39:33.513364  141076 main.go:141] libmachine: (addons-916037) setting executable bit set on /home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037 (perms=drwx------)
	I1009 18:39:33.513378  141076 main.go:141] libmachine: (addons-916037) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21683-136449/.minikube
	I1009 18:39:33.513394  141076 main.go:141] libmachine: (addons-916037) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21683-136449
	I1009 18:39:33.513404  141076 main.go:141] libmachine: (addons-916037) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1009 18:39:33.513413  141076 main.go:141] libmachine: (addons-916037) DBG | checking permissions on dir: /home/jenkins
	I1009 18:39:33.513429  141076 main.go:141] libmachine: (addons-916037) setting executable bit set on /home/jenkins/minikube-integration/21683-136449/.minikube/machines (perms=drwxr-xr-x)
	I1009 18:39:33.513442  141076 main.go:141] libmachine: (addons-916037) DBG | checking permissions on dir: /home
	I1009 18:39:33.513455  141076 main.go:141] libmachine: (addons-916037) setting executable bit set on /home/jenkins/minikube-integration/21683-136449/.minikube (perms=drwxr-xr-x)
	I1009 18:39:33.513466  141076 main.go:141] libmachine: (addons-916037) DBG | skipping /home - not owner
	I1009 18:39:33.513487  141076 main.go:141] libmachine: (addons-916037) setting executable bit set on /home/jenkins/minikube-integration/21683-136449 (perms=drwxrwxr-x)
	I1009 18:39:33.513504  141076 main.go:141] libmachine: (addons-916037) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 18:39:33.513517  141076 main.go:141] libmachine: (addons-916037) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1009 18:39:33.513527  141076 main.go:141] libmachine: (addons-916037) defining domain...
	I1009 18:39:33.514657  141076 main.go:141] libmachine: (addons-916037) defining domain using XML: 
	I1009 18:39:33.514676  141076 main.go:141] libmachine: (addons-916037) <domain type='kvm'>
	I1009 18:39:33.514682  141076 main.go:141] libmachine: (addons-916037)   <name>addons-916037</name>
	I1009 18:39:33.514687  141076 main.go:141] libmachine: (addons-916037)   <memory unit='MiB'>4096</memory>
	I1009 18:39:33.514692  141076 main.go:141] libmachine: (addons-916037)   <vcpu>2</vcpu>
	I1009 18:39:33.514695  141076 main.go:141] libmachine: (addons-916037)   <features>
	I1009 18:39:33.514702  141076 main.go:141] libmachine: (addons-916037)     <acpi/>
	I1009 18:39:33.514708  141076 main.go:141] libmachine: (addons-916037)     <apic/>
	I1009 18:39:33.514716  141076 main.go:141] libmachine: (addons-916037)     <pae/>
	I1009 18:39:33.514737  141076 main.go:141] libmachine: (addons-916037)   </features>
	I1009 18:39:33.514749  141076 main.go:141] libmachine: (addons-916037)   <cpu mode='host-passthrough'>
	I1009 18:39:33.514753  141076 main.go:141] libmachine: (addons-916037)   </cpu>
	I1009 18:39:33.514757  141076 main.go:141] libmachine: (addons-916037)   <os>
	I1009 18:39:33.514765  141076 main.go:141] libmachine: (addons-916037)     <type>hvm</type>
	I1009 18:39:33.514787  141076 main.go:141] libmachine: (addons-916037)     <boot dev='cdrom'/>
	I1009 18:39:33.514808  141076 main.go:141] libmachine: (addons-916037)     <boot dev='hd'/>
	I1009 18:39:33.514831  141076 main.go:141] libmachine: (addons-916037)     <bootmenu enable='no'/>
	I1009 18:39:33.514841  141076 main.go:141] libmachine: (addons-916037)   </os>
	I1009 18:39:33.514849  141076 main.go:141] libmachine: (addons-916037)   <devices>
	I1009 18:39:33.514859  141076 main.go:141] libmachine: (addons-916037)     <disk type='file' device='cdrom'>
	I1009 18:39:33.514871  141076 main.go:141] libmachine: (addons-916037)       <source file='/home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/boot2docker.iso'/>
	I1009 18:39:33.514876  141076 main.go:141] libmachine: (addons-916037)       <target dev='hdc' bus='scsi'/>
	I1009 18:39:33.514897  141076 main.go:141] libmachine: (addons-916037)       <readonly/>
	I1009 18:39:33.514916  141076 main.go:141] libmachine: (addons-916037)     </disk>
	I1009 18:39:33.514932  141076 main.go:141] libmachine: (addons-916037)     <disk type='file' device='disk'>
	I1009 18:39:33.514945  141076 main.go:141] libmachine: (addons-916037)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 18:39:33.514962  141076 main.go:141] libmachine: (addons-916037)       <source file='/home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/addons-916037.rawdisk'/>
	I1009 18:39:33.514972  141076 main.go:141] libmachine: (addons-916037)       <target dev='hda' bus='virtio'/>
	I1009 18:39:33.514978  141076 main.go:141] libmachine: (addons-916037)     </disk>
	I1009 18:39:33.514985  141076 main.go:141] libmachine: (addons-916037)     <interface type='network'>
	I1009 18:39:33.514991  141076 main.go:141] libmachine: (addons-916037)       <source network='mk-addons-916037'/>
	I1009 18:39:33.514995  141076 main.go:141] libmachine: (addons-916037)       <model type='virtio'/>
	I1009 18:39:33.514999  141076 main.go:141] libmachine: (addons-916037)     </interface>
	I1009 18:39:33.515003  141076 main.go:141] libmachine: (addons-916037)     <interface type='network'>
	I1009 18:39:33.515008  141076 main.go:141] libmachine: (addons-916037)       <source network='default'/>
	I1009 18:39:33.515018  141076 main.go:141] libmachine: (addons-916037)       <model type='virtio'/>
	I1009 18:39:33.515032  141076 main.go:141] libmachine: (addons-916037)     </interface>
	I1009 18:39:33.515044  141076 main.go:141] libmachine: (addons-916037)     <serial type='pty'>
	I1009 18:39:33.515057  141076 main.go:141] libmachine: (addons-916037)       <target port='0'/>
	I1009 18:39:33.515066  141076 main.go:141] libmachine: (addons-916037)     </serial>
	I1009 18:39:33.515074  141076 main.go:141] libmachine: (addons-916037)     <console type='pty'>
	I1009 18:39:33.515084  141076 main.go:141] libmachine: (addons-916037)       <target type='serial' port='0'/>
	I1009 18:39:33.515092  141076 main.go:141] libmachine: (addons-916037)     </console>
	I1009 18:39:33.515102  141076 main.go:141] libmachine: (addons-916037)     <rng model='virtio'>
	I1009 18:39:33.515113  141076 main.go:141] libmachine: (addons-916037)       <backend model='random'>/dev/random</backend>
	I1009 18:39:33.515123  141076 main.go:141] libmachine: (addons-916037)     </rng>
	I1009 18:39:33.515144  141076 main.go:141] libmachine: (addons-916037)   </devices>
	I1009 18:39:33.515158  141076 main.go:141] libmachine: (addons-916037) </domain>
	I1009 18:39:33.515174  141076 main.go:141] libmachine: (addons-916037) 
	I1009 18:39:33.519586  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:22:e7:f7 in network default
	I1009 18:39:33.520148  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:33.520167  141076 main.go:141] libmachine: (addons-916037) starting domain...
	I1009 18:39:33.520179  141076 main.go:141] libmachine: (addons-916037) ensuring networks are active...
	I1009 18:39:33.520833  141076 main.go:141] libmachine: (addons-916037) Ensuring network default is active
	I1009 18:39:33.521177  141076 main.go:141] libmachine: (addons-916037) Ensuring network mk-addons-916037 is active
	I1009 18:39:33.521698  141076 main.go:141] libmachine: (addons-916037) getting domain XML...
	I1009 18:39:33.522627  141076 main.go:141] libmachine: (addons-916037) DBG | starting domain XML:
	I1009 18:39:33.522637  141076 main.go:141] libmachine: (addons-916037) DBG | <domain type='kvm'>
	I1009 18:39:33.522643  141076 main.go:141] libmachine: (addons-916037) DBG |   <name>addons-916037</name>
	I1009 18:39:33.522648  141076 main.go:141] libmachine: (addons-916037) DBG |   <uuid>4607c46f-1189-4214-8edd-608677ba036a</uuid>
	I1009 18:39:33.522654  141076 main.go:141] libmachine: (addons-916037) DBG |   <memory unit='KiB'>4194304</memory>
	I1009 18:39:33.522658  141076 main.go:141] libmachine: (addons-916037) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1009 18:39:33.522664  141076 main.go:141] libmachine: (addons-916037) DBG |   <vcpu placement='static'>2</vcpu>
	I1009 18:39:33.522668  141076 main.go:141] libmachine: (addons-916037) DBG |   <os>
	I1009 18:39:33.522674  141076 main.go:141] libmachine: (addons-916037) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1009 18:39:33.522680  141076 main.go:141] libmachine: (addons-916037) DBG |     <boot dev='cdrom'/>
	I1009 18:39:33.522685  141076 main.go:141] libmachine: (addons-916037) DBG |     <boot dev='hd'/>
	I1009 18:39:33.522702  141076 main.go:141] libmachine: (addons-916037) DBG |     <bootmenu enable='no'/>
	I1009 18:39:33.522709  141076 main.go:141] libmachine: (addons-916037) DBG |   </os>
	I1009 18:39:33.522713  141076 main.go:141] libmachine: (addons-916037) DBG |   <features>
	I1009 18:39:33.522722  141076 main.go:141] libmachine: (addons-916037) DBG |     <acpi/>
	I1009 18:39:33.522741  141076 main.go:141] libmachine: (addons-916037) DBG |     <apic/>
	I1009 18:39:33.522753  141076 main.go:141] libmachine: (addons-916037) DBG |     <pae/>
	I1009 18:39:33.522764  141076 main.go:141] libmachine: (addons-916037) DBG |   </features>
	I1009 18:39:33.522777  141076 main.go:141] libmachine: (addons-916037) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1009 18:39:33.522786  141076 main.go:141] libmachine: (addons-916037) DBG |   <clock offset='utc'/>
	I1009 18:39:33.522795  141076 main.go:141] libmachine: (addons-916037) DBG |   <on_poweroff>destroy</on_poweroff>
	I1009 18:39:33.522799  141076 main.go:141] libmachine: (addons-916037) DBG |   <on_reboot>restart</on_reboot>
	I1009 18:39:33.522806  141076 main.go:141] libmachine: (addons-916037) DBG |   <on_crash>destroy</on_crash>
	I1009 18:39:33.522810  141076 main.go:141] libmachine: (addons-916037) DBG |   <devices>
	I1009 18:39:33.522818  141076 main.go:141] libmachine: (addons-916037) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1009 18:39:33.522823  141076 main.go:141] libmachine: (addons-916037) DBG |     <disk type='file' device='cdrom'>
	I1009 18:39:33.522831  141076 main.go:141] libmachine: (addons-916037) DBG |       <driver name='qemu' type='raw'/>
	I1009 18:39:33.522845  141076 main.go:141] libmachine: (addons-916037) DBG |       <source file='/home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/boot2docker.iso'/>
	I1009 18:39:33.522858  141076 main.go:141] libmachine: (addons-916037) DBG |       <target dev='hdc' bus='scsi'/>
	I1009 18:39:33.522869  141076 main.go:141] libmachine: (addons-916037) DBG |       <readonly/>
	I1009 18:39:33.522891  141076 main.go:141] libmachine: (addons-916037) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1009 18:39:33.522912  141076 main.go:141] libmachine: (addons-916037) DBG |     </disk>
	I1009 18:39:33.522921  141076 main.go:141] libmachine: (addons-916037) DBG |     <disk type='file' device='disk'>
	I1009 18:39:33.522933  141076 main.go:141] libmachine: (addons-916037) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1009 18:39:33.522954  141076 main.go:141] libmachine: (addons-916037) DBG |       <source file='/home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/addons-916037.rawdisk'/>
	I1009 18:39:33.522973  141076 main.go:141] libmachine: (addons-916037) DBG |       <target dev='hda' bus='virtio'/>
	I1009 18:39:33.523007  141076 main.go:141] libmachine: (addons-916037) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1009 18:39:33.523029  141076 main.go:141] libmachine: (addons-916037) DBG |     </disk>
	I1009 18:39:33.523050  141076 main.go:141] libmachine: (addons-916037) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1009 18:39:33.523077  141076 main.go:141] libmachine: (addons-916037) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1009 18:39:33.523091  141076 main.go:141] libmachine: (addons-916037) DBG |     </controller>
	I1009 18:39:33.523108  141076 main.go:141] libmachine: (addons-916037) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1009 18:39:33.523123  141076 main.go:141] libmachine: (addons-916037) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1009 18:39:33.523136  141076 main.go:141] libmachine: (addons-916037) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1009 18:39:33.523148  141076 main.go:141] libmachine: (addons-916037) DBG |     </controller>
	I1009 18:39:33.523154  141076 main.go:141] libmachine: (addons-916037) DBG |     <interface type='network'>
	I1009 18:39:33.523161  141076 main.go:141] libmachine: (addons-916037) DBG |       <mac address='52:54:00:8f:3a:a5'/>
	I1009 18:39:33.523175  141076 main.go:141] libmachine: (addons-916037) DBG |       <source network='mk-addons-916037'/>
	I1009 18:39:33.523185  141076 main.go:141] libmachine: (addons-916037) DBG |       <model type='virtio'/>
	I1009 18:39:33.523195  141076 main.go:141] libmachine: (addons-916037) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1009 18:39:33.523215  141076 main.go:141] libmachine: (addons-916037) DBG |     </interface>
	I1009 18:39:33.523227  141076 main.go:141] libmachine: (addons-916037) DBG |     <interface type='network'>
	I1009 18:39:33.523241  141076 main.go:141] libmachine: (addons-916037) DBG |       <mac address='52:54:00:22:e7:f7'/>
	I1009 18:39:33.523246  141076 main.go:141] libmachine: (addons-916037) DBG |       <source network='default'/>
	I1009 18:39:33.523252  141076 main.go:141] libmachine: (addons-916037) DBG |       <model type='virtio'/>
	I1009 18:39:33.523260  141076 main.go:141] libmachine: (addons-916037) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1009 18:39:33.523267  141076 main.go:141] libmachine: (addons-916037) DBG |     </interface>
	I1009 18:39:33.523272  141076 main.go:141] libmachine: (addons-916037) DBG |     <serial type='pty'>
	I1009 18:39:33.523278  141076 main.go:141] libmachine: (addons-916037) DBG |       <target type='isa-serial' port='0'>
	I1009 18:39:33.523285  141076 main.go:141] libmachine: (addons-916037) DBG |         <model name='isa-serial'/>
	I1009 18:39:33.523290  141076 main.go:141] libmachine: (addons-916037) DBG |       </target>
	I1009 18:39:33.523294  141076 main.go:141] libmachine: (addons-916037) DBG |     </serial>
	I1009 18:39:33.523299  141076 main.go:141] libmachine: (addons-916037) DBG |     <console type='pty'>
	I1009 18:39:33.523312  141076 main.go:141] libmachine: (addons-916037) DBG |       <target type='serial' port='0'/>
	I1009 18:39:33.523319  141076 main.go:141] libmachine: (addons-916037) DBG |     </console>
	I1009 18:39:33.523323  141076 main.go:141] libmachine: (addons-916037) DBG |     <input type='mouse' bus='ps2'/>
	I1009 18:39:33.523331  141076 main.go:141] libmachine: (addons-916037) DBG |     <input type='keyboard' bus='ps2'/>
	I1009 18:39:33.523343  141076 main.go:141] libmachine: (addons-916037) DBG |     <audio id='1' type='none'/>
	I1009 18:39:33.523352  141076 main.go:141] libmachine: (addons-916037) DBG |     <memballoon model='virtio'>
	I1009 18:39:33.523357  141076 main.go:141] libmachine: (addons-916037) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1009 18:39:33.523365  141076 main.go:141] libmachine: (addons-916037) DBG |     </memballoon>
	I1009 18:39:33.523369  141076 main.go:141] libmachine: (addons-916037) DBG |     <rng model='virtio'>
	I1009 18:39:33.523375  141076 main.go:141] libmachine: (addons-916037) DBG |       <backend model='random'>/dev/random</backend>
	I1009 18:39:33.523383  141076 main.go:141] libmachine: (addons-916037) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1009 18:39:33.523388  141076 main.go:141] libmachine: (addons-916037) DBG |     </rng>
	I1009 18:39:33.523394  141076 main.go:141] libmachine: (addons-916037) DBG |   </devices>
	I1009 18:39:33.523399  141076 main.go:141] libmachine: (addons-916037) DBG | </domain>
	I1009 18:39:33.523405  141076 main.go:141] libmachine: (addons-916037) DBG | 
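	The retry loop that follows polls libvirt for the guest's address, first from the network's DHCP leases (source=lease) and then from ARP (source=arp), until an IP shows up on mk-addons-916037. A rough manual equivalent on the KVM host, shown only as an illustrative sketch (not part of the test run), would be:

		# inspect the same data the driver is polling for
		virsh net-dhcp-leases mk-addons-916037           # leases handed out on the private network
		virsh domifaddr addons-916037 --source lease     # addresses from the lease database
		virsh domifaddr addons-916037 --source arp       # ARP fallback, as used above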
	I1009 18:39:34.750717  141076 main.go:141] libmachine: (addons-916037) waiting for domain to start...
	I1009 18:39:34.751985  141076 main.go:141] libmachine: (addons-916037) domain is now running
	I1009 18:39:34.752010  141076 main.go:141] libmachine: (addons-916037) waiting for IP...
	I1009 18:39:34.752757  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:34.753224  141076 main.go:141] libmachine: (addons-916037) DBG | no network interface addresses found for domain addons-916037 (source=lease)
	I1009 18:39:34.753243  141076 main.go:141] libmachine: (addons-916037) DBG | trying to list again with source=arp
	I1009 18:39:34.753547  141076 main.go:141] libmachine: (addons-916037) DBG | unable to find current IP address of domain addons-916037 in network mk-addons-916037 (interfaces detected: [])
	I1009 18:39:34.753624  141076 main.go:141] libmachine: (addons-916037) DBG | I1009 18:39:34.753547  141104 retry.go:31] will retry after 252.009582ms: waiting for domain to come up
	I1009 18:39:35.007106  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:35.007593  141076 main.go:141] libmachine: (addons-916037) DBG | no network interface addresses found for domain addons-916037 (source=lease)
	I1009 18:39:35.007618  141076 main.go:141] libmachine: (addons-916037) DBG | trying to list again with source=arp
	I1009 18:39:35.007955  141076 main.go:141] libmachine: (addons-916037) DBG | unable to find current IP address of domain addons-916037 in network mk-addons-916037 (interfaces detected: [])
	I1009 18:39:35.007985  141076 main.go:141] libmachine: (addons-916037) DBG | I1009 18:39:35.007927  141104 retry.go:31] will retry after 314.177504ms: waiting for domain to come up
	I1009 18:39:35.323450  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:35.324047  141076 main.go:141] libmachine: (addons-916037) DBG | no network interface addresses found for domain addons-916037 (source=lease)
	I1009 18:39:35.324073  141076 main.go:141] libmachine: (addons-916037) DBG | trying to list again with source=arp
	I1009 18:39:35.324306  141076 main.go:141] libmachine: (addons-916037) DBG | unable to find current IP address of domain addons-916037 in network mk-addons-916037 (interfaces detected: [])
	I1009 18:39:35.324373  141076 main.go:141] libmachine: (addons-916037) DBG | I1009 18:39:35.324307  141104 retry.go:31] will retry after 351.647375ms: waiting for domain to come up
	I1009 18:39:35.677921  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:35.678511  141076 main.go:141] libmachine: (addons-916037) DBG | no network interface addresses found for domain addons-916037 (source=lease)
	I1009 18:39:35.678531  141076 main.go:141] libmachine: (addons-916037) DBG | trying to list again with source=arp
	I1009 18:39:35.678808  141076 main.go:141] libmachine: (addons-916037) DBG | unable to find current IP address of domain addons-916037 in network mk-addons-916037 (interfaces detected: [])
	I1009 18:39:35.678876  141076 main.go:141] libmachine: (addons-916037) DBG | I1009 18:39:35.678792  141104 retry.go:31] will retry after 483.690887ms: waiting for domain to come up
	I1009 18:39:36.164477  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:36.165019  141076 main.go:141] libmachine: (addons-916037) DBG | no network interface addresses found for domain addons-916037 (source=lease)
	I1009 18:39:36.165041  141076 main.go:141] libmachine: (addons-916037) DBG | trying to list again with source=arp
	I1009 18:39:36.165328  141076 main.go:141] libmachine: (addons-916037) DBG | unable to find current IP address of domain addons-916037 in network mk-addons-916037 (interfaces detected: [])
	I1009 18:39:36.165354  141076 main.go:141] libmachine: (addons-916037) DBG | I1009 18:39:36.165294  141104 retry.go:31] will retry after 758.034393ms: waiting for domain to come up
	I1009 18:39:36.924837  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:36.925623  141076 main.go:141] libmachine: (addons-916037) DBG | no network interface addresses found for domain addons-916037 (source=lease)
	I1009 18:39:36.925669  141076 main.go:141] libmachine: (addons-916037) DBG | trying to list again with source=arp
	I1009 18:39:36.925958  141076 main.go:141] libmachine: (addons-916037) DBG | unable to find current IP address of domain addons-916037 in network mk-addons-916037 (interfaces detected: [])
	I1009 18:39:36.925992  141076 main.go:141] libmachine: (addons-916037) DBG | I1009 18:39:36.925935  141104 retry.go:31] will retry after 885.45057ms: waiting for domain to come up
	I1009 18:39:37.813188  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:37.813706  141076 main.go:141] libmachine: (addons-916037) DBG | no network interface addresses found for domain addons-916037 (source=lease)
	I1009 18:39:37.813729  141076 main.go:141] libmachine: (addons-916037) DBG | trying to list again with source=arp
	I1009 18:39:37.813996  141076 main.go:141] libmachine: (addons-916037) DBG | unable to find current IP address of domain addons-916037 in network mk-addons-916037 (interfaces detected: [])
	I1009 18:39:37.814027  141076 main.go:141] libmachine: (addons-916037) DBG | I1009 18:39:37.813982  141104 retry.go:31] will retry after 805.715097ms: waiting for domain to come up
	I1009 18:39:38.620962  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:38.621465  141076 main.go:141] libmachine: (addons-916037) DBG | no network interface addresses found for domain addons-916037 (source=lease)
	I1009 18:39:38.621491  141076 main.go:141] libmachine: (addons-916037) DBG | trying to list again with source=arp
	I1009 18:39:38.621853  141076 main.go:141] libmachine: (addons-916037) DBG | unable to find current IP address of domain addons-916037 in network mk-addons-916037 (interfaces detected: [])
	I1009 18:39:38.621881  141076 main.go:141] libmachine: (addons-916037) DBG | I1009 18:39:38.621805  141104 retry.go:31] will retry after 1.070369882s: waiting for domain to come up
	I1009 18:39:39.694210  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:39.694778  141076 main.go:141] libmachine: (addons-916037) DBG | no network interface addresses found for domain addons-916037 (source=lease)
	I1009 18:39:39.694810  141076 main.go:141] libmachine: (addons-916037) DBG | trying to list again with source=arp
	I1009 18:39:39.695058  141076 main.go:141] libmachine: (addons-916037) DBG | unable to find current IP address of domain addons-916037 in network mk-addons-916037 (interfaces detected: [])
	I1009 18:39:39.695081  141076 main.go:141] libmachine: (addons-916037) DBG | I1009 18:39:39.695025  141104 retry.go:31] will retry after 1.208492874s: waiting for domain to come up
	I1009 18:39:40.905731  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:40.906217  141076 main.go:141] libmachine: (addons-916037) DBG | no network interface addresses found for domain addons-916037 (source=lease)
	I1009 18:39:40.906245  141076 main.go:141] libmachine: (addons-916037) DBG | trying to list again with source=arp
	I1009 18:39:40.906481  141076 main.go:141] libmachine: (addons-916037) DBG | unable to find current IP address of domain addons-916037 in network mk-addons-916037 (interfaces detected: [])
	I1009 18:39:40.906507  141076 main.go:141] libmachine: (addons-916037) DBG | I1009 18:39:40.906456  141104 retry.go:31] will retry after 1.491629625s: waiting for domain to come up
	I1009 18:39:42.399478  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:42.400028  141076 main.go:141] libmachine: (addons-916037) DBG | no network interface addresses found for domain addons-916037 (source=lease)
	I1009 18:39:42.400052  141076 main.go:141] libmachine: (addons-916037) DBG | trying to list again with source=arp
	I1009 18:39:42.400321  141076 main.go:141] libmachine: (addons-916037) DBG | unable to find current IP address of domain addons-916037 in network mk-addons-916037 (interfaces detected: [])
	I1009 18:39:42.400350  141076 main.go:141] libmachine: (addons-916037) DBG | I1009 18:39:42.400282  141104 retry.go:31] will retry after 2.301739635s: waiting for domain to come up
	I1009 18:39:44.705109  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:44.705725  141076 main.go:141] libmachine: (addons-916037) DBG | no network interface addresses found for domain addons-916037 (source=lease)
	I1009 18:39:44.705752  141076 main.go:141] libmachine: (addons-916037) DBG | trying to list again with source=arp
	I1009 18:39:44.706047  141076 main.go:141] libmachine: (addons-916037) DBG | unable to find current IP address of domain addons-916037 in network mk-addons-916037 (interfaces detected: [])
	I1009 18:39:44.706085  141076 main.go:141] libmachine: (addons-916037) DBG | I1009 18:39:44.706044  141104 retry.go:31] will retry after 3.018415881s: waiting for domain to come up
	I1009 18:39:47.726877  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:47.727424  141076 main.go:141] libmachine: (addons-916037) DBG | no network interface addresses found for domain addons-916037 (source=lease)
	I1009 18:39:47.727448  141076 main.go:141] libmachine: (addons-916037) DBG | trying to list again with source=arp
	I1009 18:39:47.727734  141076 main.go:141] libmachine: (addons-916037) DBG | unable to find current IP address of domain addons-916037 in network mk-addons-916037 (interfaces detected: [])
	I1009 18:39:47.727757  141076 main.go:141] libmachine: (addons-916037) DBG | I1009 18:39:47.727709  141104 retry.go:31] will retry after 3.000890926s: waiting for domain to come up
	I1009 18:39:50.731889  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:50.732410  141076 main.go:141] libmachine: (addons-916037) found domain IP: 192.168.39.158
	I1009 18:39:50.732437  141076 main.go:141] libmachine: (addons-916037) reserving static IP address...
	I1009 18:39:50.732451  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has current primary IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:50.732834  141076 main.go:141] libmachine: (addons-916037) DBG | unable to find host DHCP lease matching {name: "addons-916037", mac: "52:54:00:8f:3a:a5", ip: "192.168.39.158"} in network mk-addons-916037
	I1009 18:39:50.906691  141076 main.go:141] libmachine: (addons-916037) DBG | Getting to WaitForSSH function...
	I1009 18:39:50.906847  141076 main.go:141] libmachine: (addons-916037) reserved static IP address 192.168.39.158 for domain addons-916037
	I1009 18:39:50.906869  141076 main.go:141] libmachine: (addons-916037) waiting for SSH...
	I1009 18:39:50.909493  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:50.909946  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:39:50.909991  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:50.910277  141076 main.go:141] libmachine: (addons-916037) DBG | Using SSH client type: external
	I1009 18:39:50.910308  141076 main.go:141] libmachine: (addons-916037) DBG | Using SSH private key: /home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/id_rsa (-rw-------)
	I1009 18:39:50.910359  141076 main.go:141] libmachine: (addons-916037) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 18:39:50.910382  141076 main.go:141] libmachine: (addons-916037) DBG | About to run SSH command:
	I1009 18:39:50.910408  141076 main.go:141] libmachine: (addons-916037) DBG | exit 0
	I1009 18:39:51.040359  141076 main.go:141] libmachine: (addons-916037) DBG | SSH cmd err, output: <nil>: 
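	The probe above is the external ssh invocation whose options and key path are logged at 18:39:50.910359; the same reachability check could be reproduced by hand along these lines (a sketch reusing the key path and address from this run):

		ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
		    -i /home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/id_rsa \
		    docker@192.168.39.158 'exit 0'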
	I1009 18:39:51.040703  141076 main.go:141] libmachine: (addons-916037) domain creation complete
	I1009 18:39:51.041111  141076 main.go:141] libmachine: (addons-916037) Calling .GetConfigRaw
	I1009 18:39:51.041787  141076 main.go:141] libmachine: (addons-916037) Calling .DriverName
	I1009 18:39:51.042011  141076 main.go:141] libmachine: (addons-916037) Calling .DriverName
	I1009 18:39:51.042226  141076 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 18:39:51.042248  141076 main.go:141] libmachine: (addons-916037) Calling .GetState
	I1009 18:39:51.044022  141076 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 18:39:51.044037  141076 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 18:39:51.044043  141076 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 18:39:51.044048  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:39:51.046923  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:51.047334  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:39:51.047362  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:51.047519  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:39:51.047715  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:39:51.047898  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:39:51.048046  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:39:51.048230  141076 main.go:141] libmachine: Using SSH client type: native
	I1009 18:39:51.048553  141076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I1009 18:39:51.048587  141076 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 18:39:51.157102  141076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:39:51.157126  141076 main.go:141] libmachine: Detecting the provisioner...
	I1009 18:39:51.157134  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:39:51.160056  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:51.160421  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:39:51.160455  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:51.160616  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:39:51.160818  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:39:51.160981  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:39:51.161117  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:39:51.161269  141076 main.go:141] libmachine: Using SSH client type: native
	I1009 18:39:51.161447  141076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I1009 18:39:51.161458  141076 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 18:39:51.268135  141076 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1009 18:39:51.268199  141076 main.go:141] libmachine: found compatible host: buildroot
	I1009 18:39:51.268210  141076 main.go:141] libmachine: Provisioning with buildroot...
	I1009 18:39:51.268219  141076 main.go:141] libmachine: (addons-916037) Calling .GetMachineName
	I1009 18:39:51.268480  141076 buildroot.go:166] provisioning hostname "addons-916037"
	I1009 18:39:51.268505  141076 main.go:141] libmachine: (addons-916037) Calling .GetMachineName
	I1009 18:39:51.268719  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:39:51.271553  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:51.271925  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:39:51.271955  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:51.272109  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:39:51.272310  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:39:51.272477  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:39:51.272642  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:39:51.272816  141076 main.go:141] libmachine: Using SSH client type: native
	I1009 18:39:51.273012  141076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I1009 18:39:51.273028  141076 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-916037 && echo "addons-916037" | sudo tee /etc/hostname
	I1009 18:39:51.395676  141076 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-916037
	
	I1009 18:39:51.395700  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:39:51.398999  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:51.399469  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:39:51.399499  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:51.399713  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:39:51.399937  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:39:51.400090  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:39:51.400228  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:39:51.400379  141076 main.go:141] libmachine: Using SSH client type: native
	I1009 18:39:51.400608  141076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I1009 18:39:51.400631  141076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-916037' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-916037/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-916037' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:39:51.514598  141076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:39:51.514641  141076 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21683-136449/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-136449/.minikube}
	I1009 18:39:51.514669  141076 buildroot.go:174] setting up certificates
	I1009 18:39:51.514692  141076 provision.go:84] configureAuth start
	I1009 18:39:51.514712  141076 main.go:141] libmachine: (addons-916037) Calling .GetMachineName
	I1009 18:39:51.515012  141076 main.go:141] libmachine: (addons-916037) Calling .GetIP
	I1009 18:39:51.518351  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:51.518822  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:39:51.518856  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:51.519058  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:39:51.521635  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:51.521952  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:39:51.521974  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:51.522147  141076 provision.go:143] copyHostCerts
	I1009 18:39:51.522228  141076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-136449/.minikube/ca.pem (1082 bytes)
	I1009 18:39:51.522372  141076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-136449/.minikube/cert.pem (1123 bytes)
	I1009 18:39:51.522473  141076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-136449/.minikube/key.pem (1675 bytes)
	I1009 18:39:51.522549  141076 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-136449/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca-key.pem org=jenkins.addons-916037 san=[127.0.0.1 192.168.39.158 addons-916037 localhost minikube]
	I1009 18:39:51.841382  141076 provision.go:177] copyRemoteCerts
	I1009 18:39:51.841441  141076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:39:51.841466  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:39:51.844271  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:51.844687  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:39:51.844716  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:51.844962  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:39:51.845134  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:39:51.845288  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:39:51.845408  141076 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/id_rsa Username:docker}
	I1009 18:39:51.932975  141076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:39:51.965542  141076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 18:39:51.994015  141076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 18:39:52.022280  141076 provision.go:87] duration metric: took 507.569437ms to configureAuth
	I1009 18:39:52.022308  141076 buildroot.go:189] setting minikube options for container-runtime
	I1009 18:39:52.022487  141076 config.go:182] Loaded profile config "addons-916037": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:39:52.022572  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:39:52.025438  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:52.025779  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:39:52.025804  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:52.026034  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:39:52.026216  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:39:52.026509  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:39:52.026702  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:39:52.026886  141076 main.go:141] libmachine: Using SSH client type: native
	I1009 18:39:52.027099  141076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I1009 18:39:52.027114  141076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:39:52.265648  141076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:39:52.265671  141076 main.go:141] libmachine: Checking connection to Docker...
	I1009 18:39:52.265681  141076 main.go:141] libmachine: (addons-916037) Calling .GetURL
	I1009 18:39:52.267105  141076 main.go:141] libmachine: (addons-916037) DBG | using libvirt version 8000000
	I1009 18:39:52.270117  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:52.270770  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:39:52.270802  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:52.271100  141076 main.go:141] libmachine: Docker is up and running!
	I1009 18:39:52.271114  141076 main.go:141] libmachine: Reticulating splines...
	I1009 18:39:52.271121  141076 client.go:171] duration metric: took 20.976598608s to LocalClient.Create
	I1009 18:39:52.271143  141076 start.go:168] duration metric: took 20.976662604s to libmachine.API.Create "addons-916037"
	I1009 18:39:52.271153  141076 start.go:294] postStartSetup for "addons-916037" (driver="kvm2")
	I1009 18:39:52.271161  141076 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:39:52.271178  141076 main.go:141] libmachine: (addons-916037) Calling .DriverName
	I1009 18:39:52.271414  141076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:39:52.271441  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:39:52.273767  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:52.274092  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:39:52.274117  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:52.274275  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:39:52.274466  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:39:52.274627  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:39:52.274741  141076 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/id_rsa Username:docker}
	I1009 18:39:52.358676  141076 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:39:52.363908  141076 info.go:137] Remote host: Buildroot 2025.02
	I1009 18:39:52.363938  141076 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-136449/.minikube/addons for local assets ...
	I1009 18:39:52.364035  141076 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-136449/.minikube/files for local assets ...
	I1009 18:39:52.364069  141076 start.go:297] duration metric: took 92.90887ms for postStartSetup
	I1009 18:39:52.364112  141076 main.go:141] libmachine: (addons-916037) Calling .GetConfigRaw
	I1009 18:39:52.364749  141076 main.go:141] libmachine: (addons-916037) Calling .GetIP
	I1009 18:39:52.367780  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:52.368154  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:39:52.368190  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:52.368473  141076 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/config.json ...
	I1009 18:39:52.368720  141076 start.go:129] duration metric: took 21.090660639s to createHost
	I1009 18:39:52.368746  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:39:52.371682  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:52.372102  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:39:52.372126  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:52.372347  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:39:52.372534  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:39:52.372770  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:39:52.372959  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:39:52.373132  141076 main.go:141] libmachine: Using SSH client type: native
	I1009 18:39:52.373343  141076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.158 22 <nil> <nil>}
	I1009 18:39:52.373353  141076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 18:39:52.479005  141076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760035192.441209469
	
	I1009 18:39:52.479031  141076 fix.go:217] guest clock: 1760035192.441209469
	I1009 18:39:52.479038  141076 fix.go:230] Guest: 2025-10-09 18:39:52.441209469 +0000 UTC Remote: 2025-10-09 18:39:52.36873484 +0000 UTC m=+21.203387949 (delta=72.474629ms)
	I1009 18:39:52.479077  141076 fix.go:201] guest clock delta is within tolerance: 72.474629ms
	I1009 18:39:52.479082  141076 start.go:84] releasing machines lock for "addons-916037", held for 21.201097387s
	I1009 18:39:52.479107  141076 main.go:141] libmachine: (addons-916037) Calling .DriverName
	I1009 18:39:52.479427  141076 main.go:141] libmachine: (addons-916037) Calling .GetIP
	I1009 18:39:52.482516  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:52.482891  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:39:52.482918  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:52.483079  141076 main.go:141] libmachine: (addons-916037) Calling .DriverName
	I1009 18:39:52.483601  141076 main.go:141] libmachine: (addons-916037) Calling .DriverName
	I1009 18:39:52.483800  141076 main.go:141] libmachine: (addons-916037) Calling .DriverName
	I1009 18:39:52.483902  141076 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:39:52.483950  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:39:52.483997  141076 ssh_runner.go:195] Run: cat /version.json
	I1009 18:39:52.484018  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:39:52.487287  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:52.487402  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:52.487758  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:39:52.487790  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:52.487868  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:39:52.487907  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:52.487984  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:39:52.488227  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:39:52.488249  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:39:52.488405  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:39:52.488417  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:39:52.488605  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:39:52.488604  141076 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/id_rsa Username:docker}
	I1009 18:39:52.488771  141076 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/id_rsa Username:docker}
	I1009 18:39:52.589776  141076 ssh_runner.go:195] Run: systemctl --version
	I1009 18:39:52.596258  141076 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:39:52.753791  141076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:39:52.761278  141076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:39:52.761372  141076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:39:52.783142  141076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 18:39:52.783176  141076 start.go:496] detecting cgroup driver to use...
	I1009 18:39:52.783253  141076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:39:52.804390  141076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:39:52.822101  141076 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:39:52.822160  141076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:39:52.839552  141076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:39:52.856154  141076 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:39:52.999631  141076 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:39:53.217818  141076 docker.go:234] disabling docker service ...
	I1009 18:39:53.217901  141076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:39:53.234663  141076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:39:53.249986  141076 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:39:53.408928  141076 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:39:53.553543  141076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:39:53.569386  141076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:39:53.593498  141076 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:39:53.593598  141076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:39:53.606160  141076 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 18:39:53.606226  141076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:39:53.618794  141076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:39:53.630892  141076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:39:53.642784  141076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:39:53.655514  141076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:39:53.667797  141076 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:39:53.688060  141076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:39:53.700084  141076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:39:53.709999  141076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 18:39:53.710046  141076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 18:39:53.731894  141076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:39:53.744334  141076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:39:53.889808  141076 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 18:39:54.033885  141076 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:39:54.033999  141076 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:39:54.040003  141076 start.go:564] Will wait 60s for crictl version
	I1009 18:39:54.040093  141076 ssh_runner.go:195] Run: which crictl
	I1009 18:39:54.044657  141076 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 18:39:54.086929  141076 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 18:39:54.087064  141076 ssh_runner.go:195] Run: crio --version
	I1009 18:39:54.118644  141076 ssh_runner.go:195] Run: crio --version
	I1009 18:39:54.151296  141076 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1009 18:39:54.152453  141076 main.go:141] libmachine: (addons-916037) Calling .GetIP
	I1009 18:39:54.155226  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:54.155577  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:39:54.155607  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:39:54.155826  141076 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 18:39:54.160828  141076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:39:54.176404  141076 kubeadm.go:883] updating cluster {Name:addons-916037 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-916037 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:39:54.176496  141076 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:39:54.176539  141076 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:39:54.213582  141076 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1009 18:39:54.213651  141076 ssh_runner.go:195] Run: which lz4
	I1009 18:39:54.218322  141076 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 18:39:54.223420  141076 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 18:39:54.223452  141076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1009 18:39:55.760916  141076 crio.go:462] duration metric: took 1.542619339s to copy over tarball
	I1009 18:39:55.761005  141076 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 18:39:57.432291  141076 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.671259532s)
	I1009 18:39:57.432327  141076 crio.go:469] duration metric: took 1.671378248s to extract the tarball
	I1009 18:39:57.432337  141076 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 18:39:57.476019  141076 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:39:57.524140  141076 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:39:57.524165  141076 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:39:57.524173  141076 kubeadm.go:934] updating node { 192.168.39.158 8443 v1.34.1 crio true true} ...
	I1009 18:39:57.524293  141076 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-916037 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.158
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-916037 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 18:39:57.524382  141076 ssh_runner.go:195] Run: crio config
	I1009 18:39:57.574064  141076 cni.go:84] Creating CNI manager for ""
	I1009 18:39:57.574096  141076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 18:39:57.574122  141076 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:39:57.574154  141076 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.158 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-916037 NodeName:addons-916037 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:39:57.574294  141076 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.158
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-916037"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.158"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.158"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 18:39:57.574365  141076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:39:57.586813  141076 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 18:39:57.586895  141076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:39:57.598813  141076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1009 18:39:57.618855  141076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:39:57.639149  141076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
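	The YAML dump above (kubeadm.go:196) is the complete kubeadm configuration that minikube renders and then copies to the node as /var/tmp/minikube/kubeadm.yaml.new. As a hypothetical sanity check that is not part of the captured run, the same file could be dry-run validated with the kubeadm binary already staged on the node (the binary and file paths are taken from the surrounding log lines):

	    # hypothetical manual check inside the minikube VM; not executed by the test harness
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml.new \
	      --dry-run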
	I1009 18:39:57.659972  141076 ssh_runner.go:195] Run: grep 192.168.39.158	control-plane.minikube.internal$ /etc/hosts
	I1009 18:39:57.664346  141076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.158	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:39:57.679417  141076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:39:57.826035  141076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:39:57.858142  141076 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037 for IP: 192.168.39.158
	I1009 18:39:57.858176  141076 certs.go:195] generating shared ca certs ...
	I1009 18:39:57.858199  141076 certs.go:227] acquiring lock for ca certs: {Name:mkad58f6533e9a5aa8b52ac28f20029620803fc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:39:57.858407  141076 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-136449/.minikube/ca.key
	I1009 18:39:58.047407  141076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-136449/.minikube/ca.crt ...
	I1009 18:39:58.047435  141076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-136449/.minikube/ca.crt: {Name:mk0efcf89745555ea8abf5d44e7d2c3a739ee35f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:39:58.047666  141076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-136449/.minikube/ca.key ...
	I1009 18:39:58.047685  141076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-136449/.minikube/ca.key: {Name:mka808e6e5b04487d05ea4dc1ccc4d76d5954dda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:39:58.047803  141076 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-136449/.minikube/proxy-client-ca.key
	I1009 18:39:58.574244  141076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-136449/.minikube/proxy-client-ca.crt ...
	I1009 18:39:58.574272  141076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-136449/.minikube/proxy-client-ca.crt: {Name:mkd9438a7831b295fc7e621db0555b17bf86ce52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:39:58.574462  141076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-136449/.minikube/proxy-client-ca.key ...
	I1009 18:39:58.574502  141076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-136449/.minikube/proxy-client-ca.key: {Name:mk4fb55e1be6a3bb7d1e441010b80982e67ab53f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:39:58.574634  141076 certs.go:257] generating profile certs ...
	I1009 18:39:58.574691  141076 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.key
	I1009 18:39:58.574713  141076 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt with IP's: []
	I1009 18:39:58.708790  141076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt ...
	I1009 18:39:58.708818  141076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: {Name:mk06545abb064aca40dcacfba4de4d1354138dbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:39:58.709002  141076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.key ...
	I1009 18:39:58.709025  141076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.key: {Name:mk453c3ef3c5273806135599b8e2c6d5d81aa579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:39:58.709129  141076 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/apiserver.key.6d239d97
	I1009 18:39:58.709150  141076 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/apiserver.crt.6d239d97 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.158]
	I1009 18:39:59.149436  141076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/apiserver.crt.6d239d97 ...
	I1009 18:39:59.149471  141076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/apiserver.crt.6d239d97: {Name:mkb3e580d58458cf6a032c50a08ec91fb4914125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:39:59.149681  141076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/apiserver.key.6d239d97 ...
	I1009 18:39:59.149701  141076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/apiserver.key.6d239d97: {Name:mk674d224eb45bb87e8807478ed1dcd32afc5c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:39:59.149841  141076 certs.go:382] copying /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/apiserver.crt.6d239d97 -> /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/apiserver.crt
	I1009 18:39:59.149948  141076 certs.go:386] copying /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/apiserver.key.6d239d97 -> /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/apiserver.key
	I1009 18:39:59.150009  141076 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/proxy-client.key
	I1009 18:39:59.150028  141076 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/proxy-client.crt with IP's: []
	I1009 18:39:59.362540  141076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/proxy-client.crt ...
	I1009 18:39:59.362575  141076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/proxy-client.crt: {Name:mkfa07c59350680dc9e90cd48b9da8c8473d17d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:39:59.362751  141076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/proxy-client.key ...
	I1009 18:39:59.362769  141076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/proxy-client.key: {Name:mk68013267d3a512b9f665217a631ea31c47491e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:39:59.362967  141076 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 18:39:59.363010  141076 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:39:59.363036  141076 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:39:59.363060  141076 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/key.pem (1675 bytes)
	I1009 18:39:59.363627  141076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:39:59.395515  141076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:39:59.431716  141076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:39:59.479070  141076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:39:59.520597  141076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1009 18:39:59.549875  141076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:39:59.580538  141076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:39:59.612669  141076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 18:39:59.643954  141076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:39:59.675750  141076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:39:59.697409  141076 ssh_runner.go:195] Run: openssl version
	I1009 18:39:59.704248  141076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:39:59.718350  141076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:39:59.723688  141076 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:39:59.723752  141076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:39:59.731385  141076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:39:59.744488  141076 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:39:59.749597  141076 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:39:59.749662  141076 kubeadm.go:400] StartCluster: {Name:addons-916037 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-916037 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:39:59.749737  141076 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:39:59.749825  141076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:39:59.789983  141076 cri.go:89] found id: ""
	I1009 18:39:59.790066  141076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:39:59.802468  141076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:39:59.814137  141076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:39:59.825548  141076 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:39:59.825578  141076 kubeadm.go:157] found existing configuration files:
	
	I1009 18:39:59.825629  141076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:39:59.836723  141076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:39:59.836787  141076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:39:59.847678  141076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:39:59.858168  141076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:39:59.858222  141076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:39:59.869102  141076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:39:59.879826  141076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:39:59.879876  141076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:39:59.891181  141076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:39:59.901913  141076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:39:59.901960  141076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:39:59.913175  141076 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 18:40:00.064618  141076 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:40:13.771377  141076 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:40:13.771463  141076 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:40:13.771565  141076 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:40:13.771686  141076 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:40:13.771855  141076 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:40:13.771948  141076 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:40:13.773842  141076 out.go:252]   - Generating certificates and keys ...
	I1009 18:40:13.773950  141076 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:40:13.774044  141076 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:40:13.774166  141076 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 18:40:13.774266  141076 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 18:40:13.774372  141076 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 18:40:13.774448  141076 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 18:40:13.774529  141076 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 18:40:13.774739  141076 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-916037 localhost] and IPs [192.168.39.158 127.0.0.1 ::1]
	I1009 18:40:13.774826  141076 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 18:40:13.774969  141076 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-916037 localhost] and IPs [192.168.39.158 127.0.0.1 ::1]
	I1009 18:40:13.775066  141076 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 18:40:13.775176  141076 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 18:40:13.775238  141076 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 18:40:13.775328  141076 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:40:13.775384  141076 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:40:13.775431  141076 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:40:13.775477  141076 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:40:13.775532  141076 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:40:13.775612  141076 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:40:13.775689  141076 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:40:13.775780  141076 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:40:13.777414  141076 out.go:252]   - Booting up control plane ...
	I1009 18:40:13.777576  141076 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:40:13.777698  141076 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:40:13.777800  141076 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:40:13.777963  141076 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:40:13.778063  141076 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:40:13.778152  141076 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:40:13.778232  141076 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:40:13.778267  141076 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:40:13.778434  141076 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:40:13.778608  141076 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:40:13.778716  141076 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501530417s
	I1009 18:40:13.778838  141076 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:40:13.778935  141076 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.158:8443/livez
	I1009 18:40:13.779069  141076 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:40:13.779169  141076 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:40:13.779267  141076 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.211860445s
	I1009 18:40:13.779353  141076 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.884495697s
	I1009 18:40:13.779468  141076 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.00239732s
	I1009 18:40:13.779670  141076 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 18:40:13.779876  141076 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 18:40:13.779984  141076 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 18:40:13.780139  141076 kubeadm.go:318] [mark-control-plane] Marking the node addons-916037 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 18:40:13.780194  141076 kubeadm.go:318] [bootstrap-token] Using token: a0shgr.okh0juewz5lqg3to
	I1009 18:40:13.781942  141076 out.go:252]   - Configuring RBAC rules ...
	I1009 18:40:13.782067  141076 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 18:40:13.782145  141076 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 18:40:13.782354  141076 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 18:40:13.782523  141076 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 18:40:13.782714  141076 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 18:40:13.782811  141076 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 18:40:13.782928  141076 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 18:40:13.782971  141076 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1009 18:40:13.783034  141076 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1009 18:40:13.783045  141076 kubeadm.go:318] 
	I1009 18:40:13.783128  141076 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1009 18:40:13.783137  141076 kubeadm.go:318] 
	I1009 18:40:13.783330  141076 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1009 18:40:13.783346  141076 kubeadm.go:318] 
	I1009 18:40:13.783385  141076 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1009 18:40:13.783443  141076 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 18:40:13.783494  141076 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 18:40:13.783501  141076 kubeadm.go:318] 
	I1009 18:40:13.783546  141076 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1009 18:40:13.783552  141076 kubeadm.go:318] 
	I1009 18:40:13.783609  141076 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 18:40:13.783616  141076 kubeadm.go:318] 
	I1009 18:40:13.783663  141076 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1009 18:40:13.783729  141076 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 18:40:13.783791  141076 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 18:40:13.783797  141076 kubeadm.go:318] 
	I1009 18:40:13.783886  141076 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 18:40:13.783969  141076 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1009 18:40:13.783976  141076 kubeadm.go:318] 
	I1009 18:40:13.784053  141076 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token a0shgr.okh0juewz5lqg3to \
	I1009 18:40:13.784152  141076 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b154372d7a3324df2226fe4f135682bb984efe7fcf143baa3bcef25ec04dbb6b \
	I1009 18:40:13.784175  141076 kubeadm.go:318] 	--control-plane 
	I1009 18:40:13.784184  141076 kubeadm.go:318] 
	I1009 18:40:13.784266  141076 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1009 18:40:13.784275  141076 kubeadm.go:318] 
	I1009 18:40:13.784347  141076 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token a0shgr.okh0juewz5lqg3to \
	I1009 18:40:13.784463  141076 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b154372d7a3324df2226fe4f135682bb984efe7fcf143baa3bcef25ec04dbb6b 
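	The join commands above embed a --discovery-token-ca-cert-hash. As a hypothetical cross-check that the test does not perform, that sha256 value can be recomputed from the cluster CA using the standard formula from the kubeadm documentation; the CA path below comes from the certificatesDir shown earlier in this log, and the pipeline assumes an RSA CA key:

	    # hypothetical check; the output should match b154372d7a33... if the hash corresponds to the CA on disk
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex \
	      | sed 's/^.* //'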
	I1009 18:40:13.784479  141076 cni.go:84] Creating CNI manager for ""
	I1009 18:40:13.784485  141076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 18:40:13.786317  141076 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 18:40:13.787629  141076 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 18:40:13.802294  141076 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
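	The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not shown in the log. For orientation only, a generic bridge CNI conflist has roughly this shape (illustrative, not the literal file minikube writes; the subnet matches the podSubnet from the kubeadm config above):

	    {
	      "cniVersion": "1.0.0",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }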
	I1009 18:40:13.842181  141076 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 18:40:13.842311  141076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:40:13.842380  141076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-916037 minikube.k8s.io/updated_at=2025_10_09T18_40_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb minikube.k8s.io/name=addons-916037 minikube.k8s.io/primary=true
	I1009 18:40:14.006929  141076 ops.go:34] apiserver oom_adj: -16
	I1009 18:40:14.006935  141076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:40:14.507332  141076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:40:15.007463  141076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:40:15.507791  141076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:40:16.007725  141076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:40:16.507134  141076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:40:17.007141  141076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:40:17.507721  141076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:40:18.007552  141076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:40:18.110515  141076 kubeadm.go:1113] duration metric: took 4.268280141s to wait for elevateKubeSystemPrivileges
	I1009 18:40:18.110576  141076 kubeadm.go:402] duration metric: took 18.360919032s to StartCluster
	I1009 18:40:18.110603  141076 settings.go:142] acquiring lock: {Name:mk9b9e0b3207d052c253a9ce8599048f2fcb59d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:40:18.110740  141076 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-136449/kubeconfig
	I1009 18:40:18.111179  141076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-136449/kubeconfig: {Name:mk0cc9985a025be104fc679cfaab9640e2d88e46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:40:18.111421  141076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 18:40:18.111443  141076 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:40:18.111551  141076 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1009 18:40:18.111736  141076 addons.go:69] Setting yakd=true in profile "addons-916037"
	I1009 18:40:18.111769  141076 addons.go:238] Setting addon yakd=true in "addons-916037"
	I1009 18:40:18.111792  141076 addons.go:69] Setting inspektor-gadget=true in profile "addons-916037"
	I1009 18:40:18.111822  141076 addons.go:69] Setting storage-provisioner=true in profile "addons-916037"
	I1009 18:40:18.111850  141076 addons.go:238] Setting addon storage-provisioner=true in "addons-916037"
	I1009 18:40:18.111858  141076 addons.go:69] Setting gcp-auth=true in profile "addons-916037"
	I1009 18:40:18.111872  141076 host.go:66] Checking if "addons-916037" exists ...
	I1009 18:40:18.111884  141076 mustload.go:65] Loading cluster: addons-916037
	I1009 18:40:18.111867  141076 addons.go:69] Setting default-storageclass=true in profile "addons-916037"
	I1009 18:40:18.111917  141076 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-916037"
	I1009 18:40:18.111913  141076 addons.go:69] Setting registry-creds=true in profile "addons-916037"
	I1009 18:40:18.111967  141076 addons.go:238] Setting addon registry-creds=true in "addons-916037"
	I1009 18:40:18.112033  141076 host.go:66] Checking if "addons-916037" exists ...
	I1009 18:40:18.112057  141076 config.go:182] Loaded profile config "addons-916037": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:40:18.112011  141076 addons.go:69] Setting volcano=true in profile "addons-916037"
	I1009 18:40:18.112103  141076 addons.go:238] Setting addon volcano=true in "addons-916037"
	I1009 18:40:18.112165  141076 host.go:66] Checking if "addons-916037" exists ...
	I1009 18:40:18.112210  141076 addons.go:69] Setting volumesnapshots=true in profile "addons-916037"
	I1009 18:40:18.112221  141076 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-916037"
	I1009 18:40:18.112229  141076 addons.go:238] Setting addon volumesnapshots=true in "addons-916037"
	I1009 18:40:18.112273  141076 host.go:66] Checking if "addons-916037" exists ...
	I1009 18:40:18.112275  141076 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-916037"
	I1009 18:40:18.112307  141076 host.go:66] Checking if "addons-916037" exists ...
	I1009 18:40:18.112448  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.112459  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.112491  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.112524  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.112540  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.112596  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.112696  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.111850  141076 addons.go:238] Setting addon inspektor-gadget=true in "addons-916037"
	I1009 18:40:18.112732  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.112181  141076 addons.go:69] Setting ingress-dns=true in profile "addons-916037"
	I1009 18:40:18.112746  141076 addons.go:238] Setting addon ingress-dns=true in "addons-916037"
	I1009 18:40:18.112752  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.112770  141076 host.go:66] Checking if "addons-916037" exists ...
	I1009 18:40:18.112782  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.112175  141076 addons.go:69] Setting ingress=true in profile "addons-916037"
	I1009 18:40:18.112826  141076 addons.go:238] Setting addon ingress=true in "addons-916037"
	I1009 18:40:18.112195  141076 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-916037"
	I1009 18:40:18.112847  141076 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-916037"
	I1009 18:40:18.112870  141076 host.go:66] Checking if "addons-916037" exists ...
	I1009 18:40:18.112733  141076 host.go:66] Checking if "addons-916037" exists ...
	I1009 18:40:18.113248  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.113268  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.112197  141076 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-916037"
	I1009 18:40:18.113331  141076 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-916037"
	I1009 18:40:18.113419  141076 host.go:66] Checking if "addons-916037" exists ...
	I1009 18:40:18.113465  141076 host.go:66] Checking if "addons-916037" exists ...
	I1009 18:40:18.113578  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.113602  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.113818  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.113873  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.113909  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.113942  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.112696  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.114105  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.114215  141076 out.go:179] * Verifying Kubernetes components...
	I1009 18:40:18.112157  141076 config.go:182] Loaded profile config "addons-916037": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:40:18.112202  141076 addons.go:69] Setting metrics-server=true in profile "addons-916037"
	I1009 18:40:18.114342  141076 addons.go:238] Setting addon metrics-server=true in "addons-916037"
	I1009 18:40:18.114369  141076 host.go:66] Checking if "addons-916037" exists ...
	I1009 18:40:18.114628  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.114650  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.112214  141076 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-916037"
	I1009 18:40:18.114751  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.114765  141076 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-916037"
	I1009 18:40:18.114771  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.113133  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.115604  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.112191  141076 addons.go:69] Setting cloud-spanner=true in profile "addons-916037"
	I1009 18:40:18.117392  141076 addons.go:238] Setting addon cloud-spanner=true in "addons-916037"
	I1009 18:40:18.117426  141076 host.go:66] Checking if "addons-916037" exists ...
	I1009 18:40:18.117874  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.111811  141076 host.go:66] Checking if "addons-916037" exists ...
	I1009 18:40:18.113148  141076 addons.go:69] Setting registry=true in profile "addons-916037"
	I1009 18:40:18.121740  141076 addons.go:238] Setting addon registry=true in "addons-916037"
	I1009 18:40:18.121786  141076 host.go:66] Checking if "addons-916037" exists ...
	I1009 18:40:18.122479  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.122525  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.122671  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.123230  141076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:40:18.123303  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.123341  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.130553  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.130618  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.138676  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41171
	I1009 18:40:18.140300  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46865
	I1009 18:40:18.142131  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.142960  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37841
	I1009 18:40:18.143515  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38957
	I1009 18:40:18.144020  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.144552  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.144586  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.145109  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.145915  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.145941  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.146837  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.146928  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.146942  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.146950  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.150672  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.150765  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.150808  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.150923  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.150944  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.151389  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.151437  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.151637  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.151693  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.151830  141076 main.go:141] libmachine: (addons-916037) Calling .GetState
	I1009 18:40:18.158092  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43577
	I1009 18:40:18.158107  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36971
	I1009 18:40:18.158284  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44893
	I1009 18:40:18.158622  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.159478  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.159333  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.159379  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.160137  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.160154  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.161670  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.162260  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.162297  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.162366  141076 addons.go:238] Setting addon default-storageclass=true in "addons-916037"
	I1009 18:40:18.162404  141076 host.go:66] Checking if "addons-916037" exists ...
	I1009 18:40:18.162796  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.162812  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.162875  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.162817  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.163397  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.164654  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.164703  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.167195  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45801
	I1009 18:40:18.171054  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.171193  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45729
	I1009 18:40:18.171359  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33783
	I1009 18:40:18.171473  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.172919  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.172942  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.173087  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.173101  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.173176  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.173496  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.173664  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.173678  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.174165  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.174204  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.174839  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.175648  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.175692  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.175893  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.176017  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.176119  141076 main.go:141] libmachine: (addons-916037) Calling .GetState
	I1009 18:40:18.176759  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.176775  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.177179  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.182738  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34003
	I1009 18:40:18.182931  141076 host.go:66] Checking if "addons-916037" exists ...
	I1009 18:40:18.183369  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.183414  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.183955  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39289
	I1009 18:40:18.184119  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42891
	I1009 18:40:18.184661  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.184705  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.185110  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42831
	I1009 18:40:18.196196  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.197413  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36305
	I1009 18:40:18.197688  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45361
	I1009 18:40:18.197714  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.197906  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.197932  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.198287  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37565
	I1009 18:40:18.198535  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.198645  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.201862  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.199816  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41743
	I1009 18:40:18.205115  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.205195  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.205199  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.205219  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.205364  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.205411  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.205423  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.206810  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45621
	I1009 18:40:18.206983  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43165
	I1009 18:40:18.207064  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.207150  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.207247  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.207258  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.207307  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46031
	I1009 18:40:18.208137  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.208715  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.208757  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.209005  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.209017  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.209073  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.209593  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.209693  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.209740  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.209763  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39473
	I1009 18:40:18.210383  141076 main.go:141] libmachine: (addons-916037) Calling .GetState
	I1009 18:40:18.210635  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.210671  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.210691  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.210874  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.211106  141076 main.go:141] libmachine: (addons-916037) Calling .GetState
	I1009 18:40:18.211764  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.212165  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.212435  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.212772  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.212823  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.213021  141076 main.go:141] libmachine: (addons-916037) Calling .GetState
	I1009 18:40:18.213210  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.213220  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.213231  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.213236  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.213522  141076 main.go:141] libmachine: (addons-916037) Calling .DriverName
	I1009 18:40:18.213688  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.213709  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.213765  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.214303  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.214374  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.214772  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.215298  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.215321  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.215375  141076 main.go:141] libmachine: (addons-916037) Calling .GetState
	I1009 18:40:18.217120  141076 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1009 18:40:18.218140  141076 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1009 18:40:18.218161  141076 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1009 18:40:18.218185  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:40:18.218681  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.218979  141076 main.go:141] libmachine: (addons-916037) Calling .DriverName
	I1009 18:40:18.219279  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.219341  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.220334  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.220625  141076 main.go:141] libmachine: (addons-916037) Calling .DriverName
	I1009 18:40:18.220625  141076 main.go:141] libmachine: (addons-916037) Calling .GetState
	I1009 18:40:18.220948  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.220968  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.221004  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.221042  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.221087  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.221237  141076 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1009 18:40:18.221404  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.221977  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.222055  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.224649  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44235
	I1009 18:40:18.224993  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.225620  141076 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-916037"
	I1009 18:40:18.225671  141076 host.go:66] Checking if "addons-916037" exists ...
	I1009 18:40:18.225862  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:40:18.225884  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.225917  141076 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1009 18:40:18.226067  141076 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 18:40:18.226081  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:40:18.226084  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.226127  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.226262  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:40:18.226384  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:40:18.226499  141076 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/id_rsa Username:docker}
	I1009 18:40:18.226965  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.227019  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.228119  141076 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:40:18.228138  141076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 18:40:18.228157  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:40:18.229016  141076 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1009 18:40:18.230154  141076 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1009 18:40:18.231133  141076 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1009 18:40:18.231261  141076 main.go:141] libmachine: (addons-916037) Calling .DriverName
	I1009 18:40:18.231423  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45661
	I1009 18:40:18.231744  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45437
	I1009 18:40:18.232348  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.232830  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.232852  141076 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1009 18:40:18.232894  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.232995  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.233013  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.233294  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.233314  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.233572  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.233794  141076 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1009 18:40:18.233809  141076 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1009 18:40:18.233830  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:40:18.234387  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:40:18.234437  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.234643  141076 main.go:141] libmachine: (addons-916037) Calling .DriverName
	I1009 18:40:18.234720  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.235180  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:40:18.235386  141076 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1009 18:40:18.235581  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:40:18.235782  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:40:18.235921  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.235957  141076 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/id_rsa Username:docker}
	I1009 18:40:18.236087  141076 main.go:141] libmachine: (addons-916037) Calling .GetState
	I1009 18:40:18.236684  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.236703  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.238822  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.239083  141076 main.go:141] libmachine: (addons-916037) Calling .GetState
	I1009 18:40:18.239401  141076 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1009 18:40:18.239807  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40003
	I1009 18:40:18.241486  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.242960  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.243068  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.243521  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.243787  141076 main.go:141] libmachine: (addons-916037) Calling .GetState
	I1009 18:40:18.245691  141076 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1009 18:40:18.246649  141076 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1009 18:40:18.246669  141076 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1009 18:40:18.246694  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:40:18.246832  141076 main.go:141] libmachine: (addons-916037) Calling .DriverName
	I1009 18:40:18.249626  141076 main.go:141] libmachine: (addons-916037) Calling .DriverName
	I1009 18:40:18.250276  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44717
	I1009 18:40:18.251894  141076 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1009 18:40:18.252626  141076 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1009 18:40:18.253352  141076 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1009 18:40:18.253432  141076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1009 18:40:18.253456  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:40:18.253880  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.253993  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43369
	I1009 18:40:18.254692  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.254711  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.254794  141076 main.go:141] libmachine: (addons-916037) Calling .DriverName
	I1009 18:40:18.254997  141076 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1009 18:40:18.255014  141076 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1009 18:40:18.255045  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:40:18.255410  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.255894  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.255942  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.256414  141076 main.go:141] libmachine: (addons-916037) Calling .GetState
	I1009 18:40:18.256630  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.256644  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.256865  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34365
	I1009 18:40:18.257120  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.257569  141076 main.go:141] libmachine: (addons-916037) Calling .GetState
	I1009 18:40:18.258208  141076 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1009 18:40:18.258314  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:40:18.258469  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:40:18.258626  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:40:18.258845  141076 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/id_rsa Username:docker}
	I1009 18:40:18.259028  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:40:18.259103  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.259256  141076 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 18:40:18.259333  141076 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 18:40:18.259278  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.260809  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.260827  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.259613  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:40:18.263247  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.263334  141076 main.go:141] libmachine: (addons-916037) Calling .DriverName
	I1009 18:40:18.264003  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:18.264049  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:18.264594  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.265769  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40805
	I1009 18:40:18.266042  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.266814  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.267488  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.267508  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.267505  141076 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1009 18:40:18.269087  141076 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1009 18:40:18.269103  141076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1009 18:40:18.269225  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:40:18.270268  141076 main.go:141] libmachine: (addons-916037) Calling .DriverName
	I1009 18:40:18.270348  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:40:18.270406  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:40:18.270419  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.270457  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38131
	I1009 18:40:18.270820  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:40:18.270862  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.271066  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:40:18.271194  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34737
	I1009 18:40:18.271238  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:40:18.271288  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.271419  141076 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/id_rsa Username:docker}
	I1009 18:40:18.272211  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.272331  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:40:18.272433  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:40:18.272791  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:40:18.273052  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.273077  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.273152  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:40:18.273215  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37083
	I1009 18:40:18.273328  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.273432  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:40:18.273546  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.273618  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:40:18.273680  141076 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/id_rsa Username:docker}
	I1009 18:40:18.273815  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.274179  141076 main.go:141] libmachine: (addons-916037) Calling .GetState
	I1009 18:40:18.274596  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.274657  141076 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1009 18:40:18.274843  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.274854  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36511
	I1009 18:40:18.274940  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:40:18.275145  141076 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/id_rsa Username:docker}
	I1009 18:40:18.275369  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.276046  141076 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 18:40:18.276069  141076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1009 18:40:18.276103  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.276129  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.276155  141076 main.go:141] libmachine: (addons-916037) Calling .GetState
	I1009 18:40:18.276185  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:40:18.276191  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.276205  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.276623  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.276783  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.276798  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.276860  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.277168  141076 main.go:141] libmachine: (addons-916037) Calling .GetState
	I1009 18:40:18.277491  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.277751  141076 main.go:141] libmachine: (addons-916037) Calling .GetState
	I1009 18:40:18.278216  141076 main.go:141] libmachine: (addons-916037) Calling .GetState
	I1009 18:40:18.278581  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.280208  141076 main.go:141] libmachine: (addons-916037) Calling .DriverName
	I1009 18:40:18.280935  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:40:18.281043  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.282246  141076 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1009 18:40:18.282423  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.282505  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:40:18.282886  141076 main.go:141] libmachine: (addons-916037) Calling .DriverName
	I1009 18:40:18.283102  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:40:18.283239  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:40:18.283257  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.283648  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:40:18.283762  141076 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1009 18:40:18.283778  141076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1009 18:40:18.283797  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:40:18.283891  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:40:18.283942  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:40:18.284150  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:40:18.284360  141076 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/id_rsa Username:docker}
	I1009 18:40:18.284726  141076 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1009 18:40:18.284504  141076 main.go:141] libmachine: (addons-916037) Calling .DriverName
	I1009 18:40:18.284330  141076 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/id_rsa Username:docker}
	I1009 18:40:18.285349  141076 main.go:141] libmachine: (addons-916037) Calling .DriverName
	I1009 18:40:18.285547  141076 main.go:141] libmachine: (addons-916037) Calling .DriverName
	I1009 18:40:18.285682  141076 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 18:40:18.285694  141076 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 18:40:18.285710  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:40:18.285963  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.285984  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37379
	I1009 18:40:18.286257  141076 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 18:40:18.286269  141076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1009 18:40:18.286391  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:40:18.286520  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.286934  141076 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1009 18:40:18.286990  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:40:18.287387  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.287651  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.287650  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.288049  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.288068  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:40:18.288398  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:40:18.288509  141076 main.go:141] libmachine: (addons-916037) Calling .GetState
	I1009 18:40:18.288612  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:40:18.288778  141076 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/id_rsa Username:docker}
	I1009 18:40:18.288954  141076 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1009 18:40:18.289590  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37131
	I1009 18:40:18.290059  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:18.290746  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:18.290769  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:18.291132  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:18.291322  141076 main.go:141] libmachine: (addons-916037) Calling .GetState
	I1009 18:40:18.291884  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.292473  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.292688  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:40:18.292719  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.293152  141076 main.go:141] libmachine: (addons-916037) Calling .DriverName
	I1009 18:40:18.293321  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:40:18.293393  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:18.293408  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:18.293410  141076 out.go:179]   - Using image docker.io/registry:3.0.0
	I1009 18:40:18.293632  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:40:18.293662  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.293677  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:18.293692  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:18.293701  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:18.293707  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:18.293882  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:40:18.293972  141076 main.go:141] libmachine: (addons-916037) DBG | Closing plugin on server side
	I1009 18:40:18.294017  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:40:18.294016  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:18.294036  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:18.294059  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	W1009 18:40:18.294119  141076 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1009 18:40:18.294156  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:40:18.294300  141076 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/id_rsa Username:docker}
	I1009 18:40:18.294605  141076 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1009 18:40:18.294703  141076 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1009 18:40:18.294706  141076 main.go:141] libmachine: (addons-916037) Calling .DriverName
	I1009 18:40:18.294714  141076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1009 18:40:18.294730  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:40:18.294770  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:40:18.294919  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.295010  141076 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/id_rsa Username:docker}
	I1009 18:40:18.295545  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:40:18.295592  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.295757  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:40:18.295840  141076 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1009 18:40:18.296000  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:40:18.296216  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:40:18.296380  141076 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/id_rsa Username:docker}
	I1009 18:40:18.296776  141076 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1009 18:40:18.298129  141076 out.go:179]   - Using image docker.io/busybox:stable
	I1009 18:40:18.298304  141076 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 18:40:18.298324  141076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1009 18:40:18.298342  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:40:18.298432  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.298999  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:40:18.299035  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.299133  141076 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 18:40:18.299154  141076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1009 18:40:18.299160  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:40:18.299175  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:40:18.299327  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:40:18.299487  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:40:18.299758  141076 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/id_rsa Username:docker}
	I1009 18:40:18.302908  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.303409  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:40:18.303429  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.303451  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.303750  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:40:18.303995  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:40:18.304079  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:40:18.304106  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:18.304206  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:40:18.304319  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:40:18.304356  141076 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/id_rsa Username:docker}
	I1009 18:40:18.304515  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:40:18.304704  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:40:18.304931  141076 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/id_rsa Username:docker}
	W1009 18:40:18.388016  141076 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57292->192.168.39.158:22: read: connection reset by peer
	I1009 18:40:18.388083  141076 retry.go:31] will retry after 187.896501ms: ssh: handshake failed: read tcp 192.168.39.1:57292->192.168.39.158:22: read: connection reset by peer
	I1009 18:40:18.820051  141076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:40:18.820132  141076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 18:40:18.877669  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1009 18:40:19.211841  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:40:19.308475  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:40:19.315095  141076 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:40:19.315128  141076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1009 18:40:19.328624  141076 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1009 18:40:19.328658  141076 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1009 18:40:19.337994  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1009 18:40:19.420464  141076 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1009 18:40:19.420502  141076 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1009 18:40:19.437960  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 18:40:19.472226  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 18:40:19.479194  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 18:40:19.482690  141076 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 18:40:19.482715  141076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1009 18:40:19.487257  141076 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1009 18:40:19.487286  141076 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1009 18:40:19.533914  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1009 18:40:19.630174  141076 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1009 18:40:19.630222  141076 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1009 18:40:19.674596  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 18:40:20.083147  141076 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1009 18:40:20.083186  141076 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1009 18:40:20.088236  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:40:20.145744  141076 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1009 18:40:20.145791  141076 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1009 18:40:20.308692  141076 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1009 18:40:20.308724  141076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1009 18:40:20.346013  141076 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 18:40:20.346047  141076 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 18:40:20.384715  141076 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1009 18:40:20.384752  141076 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1009 18:40:20.798534  141076 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1009 18:40:20.798583  141076 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1009 18:40:20.799068  141076 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1009 18:40:20.799092  141076 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1009 18:40:20.828370  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1009 18:40:20.937264  141076 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 18:40:20.937296  141076 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 18:40:20.949474  141076 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1009 18:40:20.949505  141076 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1009 18:40:21.087015  141076 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1009 18:40:21.087055  141076 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1009 18:40:21.178469  141076 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1009 18:40:21.178501  141076 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1009 18:40:21.292523  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 18:40:21.377789  141076 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1009 18:40:21.377818  141076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1009 18:40:21.529487  141076 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 18:40:21.529517  141076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1009 18:40:21.658484  141076 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1009 18:40:21.658517  141076 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1009 18:40:21.870338  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1009 18:40:21.982200  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 18:40:22.143320  141076 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1009 18:40:22.143347  141076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1009 18:40:22.646852  141076 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1009 18:40:22.646918  141076 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1009 18:40:22.730745  141076 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.910563574s)
	I1009 18:40:22.730772  141076 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.910685376s)
	I1009 18:40:22.730786  141076 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1009 18:40:22.730866  141076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.853158058s)
	I1009 18:40:22.730928  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:22.730924  141076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.519046833s)
	I1009 18:40:22.730948  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:22.730965  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:22.730980  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:22.731268  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:22.731288  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:22.731299  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:22.731307  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:22.731503  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:22.731510  141076 main.go:141] libmachine: (addons-916037) DBG | Closing plugin on server side
	I1009 18:40:22.731515  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:22.731525  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:22.731533  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:22.731571  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:22.731580  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:22.731928  141076 node_ready.go:35] waiting up to 6m0s for node "addons-916037" to be "Ready" ...
	I1009 18:40:22.732113  141076 main.go:141] libmachine: (addons-916037) DBG | Closing plugin on server side
	I1009 18:40:22.732136  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:22.732151  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:22.744123  141076 node_ready.go:49] node "addons-916037" is "Ready"
	I1009 18:40:22.744155  141076 node_ready.go:38] duration metric: took 12.202053ms for node "addons-916037" to be "Ready" ...
	I1009 18:40:22.744169  141076 api_server.go:52] waiting for apiserver process to appear ...
	I1009 18:40:22.744226  141076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:40:22.759512  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:22.759535  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:22.759841  141076 main.go:141] libmachine: (addons-916037) DBG | Closing plugin on server side
	I1009 18:40:22.759880  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:22.759891  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:23.240901  141076 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1009 18:40:23.240936  141076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1009 18:40:23.275937  141076 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-916037" context rescaled to 1 replicas
	I1009 18:40:23.960602  141076 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1009 18:40:23.960632  141076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1009 18:40:24.322762  141076 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1009 18:40:24.322794  141076 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1009 18:40:24.659467  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1009 18:40:25.667074  141076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.358559444s)
	I1009 18:40:25.667120  141076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (6.329089321s)
	I1009 18:40:25.667151  141076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.229157678s)
	I1009 18:40:25.667128  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:25.667175  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:25.667158  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:25.667199  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:25.667181  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:25.667245  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:25.667481  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:25.667496  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:25.667504  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:25.667511  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:25.667672  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:25.667684  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:25.667692  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:25.667698  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:25.667775  141076 main.go:141] libmachine: (addons-916037) DBG | Closing plugin on server side
	I1009 18:40:25.667799  141076 main.go:141] libmachine: (addons-916037) DBG | Closing plugin on server side
	I1009 18:40:25.667825  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:25.667824  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:25.667832  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:25.667843  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:25.667844  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:25.667884  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:25.668190  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:25.668201  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:25.668278  141076 main.go:141] libmachine: (addons-916037) DBG | Closing plugin on server side
	I1009 18:40:25.668303  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:25.668309  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:25.670732  141076 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1009 18:40:25.670760  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:40:25.674191  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:25.674693  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:40:25.674722  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:25.674945  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:40:25.675141  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:40:25.675294  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:40:25.675436  141076 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/id_rsa Username:docker}
	I1009 18:40:26.410954  141076 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1009 18:40:26.858735  141076 addons.go:238] Setting addon gcp-auth=true in "addons-916037"
	I1009 18:40:26.858790  141076 host.go:66] Checking if "addons-916037" exists ...
	I1009 18:40:26.859074  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:26.859101  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:26.873369  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43589
	I1009 18:40:26.873978  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:26.874480  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:26.874508  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:26.874873  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:26.875371  141076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:40:26.875408  141076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:40:26.889293  141076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34049
	I1009 18:40:26.889840  141076 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:40:26.890250  141076 main.go:141] libmachine: Using API Version  1
	I1009 18:40:26.890273  141076 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:40:26.890620  141076 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:40:26.890803  141076 main.go:141] libmachine: (addons-916037) Calling .GetState
	I1009 18:40:26.892737  141076 main.go:141] libmachine: (addons-916037) Calling .DriverName
	I1009 18:40:26.892970  141076 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1009 18:40:26.892992  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHHostname
	I1009 18:40:26.896302  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:26.896808  141076 main.go:141] libmachine: (addons-916037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:3a:a5", ip: ""} in network mk-addons-916037: {Iface:virbr1 ExpiryTime:2025-10-09 19:39:48 +0000 UTC Type:0 Mac:52:54:00:8f:3a:a5 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-916037 Clientid:01:52:54:00:8f:3a:a5}
	I1009 18:40:26.896842  141076 main.go:141] libmachine: (addons-916037) DBG | domain addons-916037 has defined IP address 192.168.39.158 and MAC address 52:54:00:8f:3a:a5 in network mk-addons-916037
	I1009 18:40:26.897015  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHPort
	I1009 18:40:26.897225  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHKeyPath
	I1009 18:40:26.897348  141076 main.go:141] libmachine: (addons-916037) Calling .GetSSHUsername
	I1009 18:40:26.897481  141076 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/addons-916037/id_rsa Username:docker}
	I1009 18:40:27.702865  141076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.223628366s)
	I1009 18:40:27.702916  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:27.702917  141076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.16895501s)
	I1009 18:40:27.702959  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:27.702969  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:27.702924  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:27.702973  141076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.028348207s)
	I1009 18:40:27.703069  141076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.614811457s)
	I1009 18:40:27.703077  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:27.703086  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	W1009 18:40:27.703092  141076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:40:27.703125  141076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.87469695s)
	I1009 18:40:27.703133  141076 retry.go:31] will retry after 354.635907ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:40:27.703147  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:27.703156  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:27.703207  141076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.410648218s)
	I1009 18:40:27.703223  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:27.703228  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:27.703233  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:27.703238  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:27.703242  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:27.703249  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:27.703261  141076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.832881048s)
	I1009 18:40:27.703292  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:27.703304  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:27.703406  141076 main.go:141] libmachine: (addons-916037) DBG | Closing plugin on server side
	I1009 18:40:27.703465  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:27.703477  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:27.703484  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:27.703491  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:27.703495  141076 main.go:141] libmachine: (addons-916037) DBG | Closing plugin on server side
	I1009 18:40:27.703544  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:27.703698  141076 main.go:141] libmachine: (addons-916037) DBG | Closing plugin on server side
	I1009 18:40:27.703714  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:27.703725  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:27.703732  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:27.703756  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:27.703767  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:27.703546  141076 main.go:141] libmachine: (addons-916037) DBG | Closing plugin on server side
	I1009 18:40:27.703582  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:27.703803  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:27.703811  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:27.703817  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:27.703953  141076 main.go:141] libmachine: (addons-916037) DBG | Closing plugin on server side
	I1009 18:40:27.703974  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:27.703981  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:27.705629  141076 main.go:141] libmachine: (addons-916037) DBG | Closing plugin on server side
	I1009 18:40:27.705656  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:27.705663  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:27.705671  141076 addons.go:479] Verifying addon metrics-server=true in "addons-916037"
	I1009 18:40:27.705717  141076 main.go:141] libmachine: (addons-916037) DBG | Closing plugin on server side
	I1009 18:40:27.705738  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:27.705749  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:27.705756  141076 addons.go:479] Verifying addon registry=true in "addons-916037"
	I1009 18:40:27.703603  141076 main.go:141] libmachine: (addons-916037) DBG | Closing plugin on server side
	I1009 18:40:27.703601  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:27.705897  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:27.705906  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:27.705913  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:27.703624  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:27.705959  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:27.705967  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:27.705973  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:27.706399  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:27.706414  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:27.706706  141076 main.go:141] libmachine: (addons-916037) DBG | Closing plugin on server side
	I1009 18:40:27.706813  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:27.706822  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:27.707774  141076 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-916037 service yakd-dashboard -n yakd-dashboard
	
	I1009 18:40:27.707795  141076 out.go:179] * Verifying registry addon...
	I1009 18:40:27.709317  141076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.237059075s)
	I1009 18:40:27.709359  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:27.709372  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:27.709584  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:27.709598  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:27.709606  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:27.709620  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:27.709717  141076 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1009 18:40:27.709836  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:27.709852  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:27.709863  141076 main.go:141] libmachine: (addons-916037) DBG | Closing plugin on server side
	I1009 18:40:27.709870  141076 addons.go:479] Verifying addon ingress=true in "addons-916037"
	I1009 18:40:27.710962  141076 out.go:179] * Verifying ingress addon...
	I1009 18:40:27.712483  141076 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1009 18:40:27.790274  141076 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1009 18:40:27.790299  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:27.790372  141076 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1009 18:40:27.790389  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:27.902930  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:27.902964  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:27.903450  141076 main.go:141] libmachine: (addons-916037) DBG | Closing plugin on server side
	I1009 18:40:27.903457  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:27.903488  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:28.058824  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:40:28.229593  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:28.233070  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:28.309947  141076 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.565694891s)
	I1009 18:40:28.309983  141076 api_server.go:72] duration metric: took 10.198503688s to wait for apiserver process to appear ...
	I1009 18:40:28.309990  141076 api_server.go:88] waiting for apiserver healthz status ...
	I1009 18:40:28.310006  141076 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
	I1009 18:40:28.310422  141076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.328174998s)
	W1009 18:40:28.310463  141076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1009 18:40:28.310496  141076 retry.go:31] will retry after 179.559557ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1009 18:40:28.329041  141076 api_server.go:279] https://192.168.39.158:8443/healthz returned 200:
	ok
	I1009 18:40:28.338612  141076 api_server.go:141] control plane version: v1.34.1
	I1009 18:40:28.338645  141076 api_server.go:131] duration metric: took 28.648301ms to wait for apiserver health ...
	I1009 18:40:28.338657  141076 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 18:40:28.357919  141076 system_pods.go:59] 16 kube-system pods found
	I1009 18:40:28.357962  141076 system_pods.go:61] "amd-gpu-device-plugin-67vlm" [3df5d7ee-6455-465f-ad7b-b2a61ede0c07] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1009 18:40:28.357970  141076 system_pods.go:61] "coredns-66bc5c9577-2jlms" [09335a2a-32d9-4024-84a4-8f1cb45d446d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 18:40:28.357977  141076 system_pods.go:61] "coredns-66bc5c9577-9qg6w" [5ca638cc-b670-4c22-881b-96987cd0ee8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 18:40:28.357982  141076 system_pods.go:61] "etcd-addons-916037" [ff2109c4-f041-469e-8eb1-8df09bbcbfec] Running
	I1009 18:40:28.357986  141076 system_pods.go:61] "kube-apiserver-addons-916037" [5b975d14-bfa3-48e8-a813-f6fc34806d42] Running
	I1009 18:40:28.357989  141076 system_pods.go:61] "kube-controller-manager-addons-916037" [8de1de6d-0baf-4f29-812b-228ed302122c] Running
	I1009 18:40:28.357993  141076 system_pods.go:61] "kube-ingress-dns-minikube" [b8644093-2bd9-4e37-ba1a-ac4506191dfb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1009 18:40:28.357997  141076 system_pods.go:61] "kube-proxy-nfbpj" [c2c65154-0654-4cc6-9e3b-5c92042bd666] Running
	I1009 18:40:28.358002  141076 system_pods.go:61] "kube-scheduler-addons-916037" [3ce93154-0012-44d1-9264-1d7a2621a163] Running
	I1009 18:40:28.358011  141076 system_pods.go:61] "metrics-server-85b7d694d7-n5phl" [f6523583-4c8f-41f1-93b2-8dc87efbe5d4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 18:40:28.358019  141076 system_pods.go:61] "nvidia-device-plugin-daemonset-qknj6" [fe6c083c-73c5-4674-8b30-d26ef48988f9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1009 18:40:28.358025  141076 system_pods.go:61] "registry-66898fdd98-mhqxq" [f05566f3-9afa-47e6-9fc5-7a69a6a0fc84] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 18:40:28.358031  141076 system_pods.go:61] "registry-creds-764b6fb674-vnbmd" [9dadf4fb-2183-4f43-ac3d-e06056f0fffc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1009 18:40:28.358036  141076 system_pods.go:61] "registry-proxy-d2m77" [1bc08f6d-dc9c-42fd-a0a6-ce0dcf5e0cbb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1009 18:40:28.358039  141076 system_pods.go:61] "snapshot-controller-7d9fbc56b8-px2fq" [bebe1ef3-2c08-45cf-a876-2dfbff20fa55] Pending
	I1009 18:40:28.358046  141076 system_pods.go:61] "storage-provisioner" [ef2884fb-ec42-4ca4-954f-7c3e20f304a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 18:40:28.358051  141076 system_pods.go:74] duration metric: took 19.388533ms to wait for pod list to return data ...
	I1009 18:40:28.358061  141076 default_sa.go:34] waiting for default service account to be created ...
	I1009 18:40:28.374898  141076 default_sa.go:45] found service account: "default"
	I1009 18:40:28.374935  141076 default_sa.go:55] duration metric: took 16.866584ms for default service account to be created ...
	I1009 18:40:28.374947  141076 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 18:40:28.398449  141076 system_pods.go:86] 17 kube-system pods found
	I1009 18:40:28.398483  141076 system_pods.go:89] "amd-gpu-device-plugin-67vlm" [3df5d7ee-6455-465f-ad7b-b2a61ede0c07] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1009 18:40:28.398490  141076 system_pods.go:89] "coredns-66bc5c9577-2jlms" [09335a2a-32d9-4024-84a4-8f1cb45d446d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 18:40:28.398498  141076 system_pods.go:89] "coredns-66bc5c9577-9qg6w" [5ca638cc-b670-4c22-881b-96987cd0ee8c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 18:40:28.398501  141076 system_pods.go:89] "etcd-addons-916037" [ff2109c4-f041-469e-8eb1-8df09bbcbfec] Running
	I1009 18:40:28.398505  141076 system_pods.go:89] "kube-apiserver-addons-916037" [5b975d14-bfa3-48e8-a813-f6fc34806d42] Running
	I1009 18:40:28.398509  141076 system_pods.go:89] "kube-controller-manager-addons-916037" [8de1de6d-0baf-4f29-812b-228ed302122c] Running
	I1009 18:40:28.398515  141076 system_pods.go:89] "kube-ingress-dns-minikube" [b8644093-2bd9-4e37-ba1a-ac4506191dfb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1009 18:40:28.398519  141076 system_pods.go:89] "kube-proxy-nfbpj" [c2c65154-0654-4cc6-9e3b-5c92042bd666] Running
	I1009 18:40:28.398523  141076 system_pods.go:89] "kube-scheduler-addons-916037" [3ce93154-0012-44d1-9264-1d7a2621a163] Running
	I1009 18:40:28.398528  141076 system_pods.go:89] "metrics-server-85b7d694d7-n5phl" [f6523583-4c8f-41f1-93b2-8dc87efbe5d4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 18:40:28.398533  141076 system_pods.go:89] "nvidia-device-plugin-daemonset-qknj6" [fe6c083c-73c5-4674-8b30-d26ef48988f9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1009 18:40:28.398538  141076 system_pods.go:89] "registry-66898fdd98-mhqxq" [f05566f3-9afa-47e6-9fc5-7a69a6a0fc84] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 18:40:28.398545  141076 system_pods.go:89] "registry-creds-764b6fb674-vnbmd" [9dadf4fb-2183-4f43-ac3d-e06056f0fffc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1009 18:40:28.398551  141076 system_pods.go:89] "registry-proxy-d2m77" [1bc08f6d-dc9c-42fd-a0a6-ce0dcf5e0cbb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1009 18:40:28.398570  141076 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jk69r" [684c78b7-ee1e-4fbf-9ddc-49276e7e599e] Pending
	I1009 18:40:28.398582  141076 system_pods.go:89] "snapshot-controller-7d9fbc56b8-px2fq" [bebe1ef3-2c08-45cf-a876-2dfbff20fa55] Pending
	I1009 18:40:28.398588  141076 system_pods.go:89] "storage-provisioner" [ef2884fb-ec42-4ca4-954f-7c3e20f304a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 18:40:28.398597  141076 system_pods.go:126] duration metric: took 23.642992ms to wait for k8s-apps to be running ...
	I1009 18:40:28.398612  141076 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 18:40:28.398664  141076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:40:28.491034  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 18:40:28.724183  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:28.725339  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:29.222900  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:29.223126  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:29.522130  141076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.862603047s)
	I1009 18:40:29.522150  141076 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.629158339s)
	I1009 18:40:29.522199  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:29.522225  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:29.522667  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:29.522686  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:29.522721  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:29.522731  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:29.522936  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:29.522955  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:29.522966  141076 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-916037"
	I1009 18:40:29.523861  141076 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1009 18:40:29.525584  141076 out.go:179] * Verifying csi-hostpath-driver addon...
	I1009 18:40:29.526892  141076 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1009 18:40:29.527533  141076 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1009 18:40:29.527951  141076 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1009 18:40:29.527970  141076 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1009 18:40:29.566448  141076 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1009 18:40:29.566477  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:29.676376  141076 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1009 18:40:29.676411  141076 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1009 18:40:29.726639  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:29.727032  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:29.752123  141076 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 18:40:29.752158  141076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1009 18:40:29.909441  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 18:40:30.036234  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:30.218331  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:30.221610  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:30.536709  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:30.717946  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:30.718530  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:31.035131  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:31.218140  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:31.218504  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:31.540297  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:31.674391  141076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.615522048s)
	I1009 18:40:31.674417  141076 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.27572288s)
	I1009 18:40:31.674444  141076 system_svc.go:56] duration metric: took 3.275828404s WaitForService to wait for kubelet
	W1009 18:40:31.674461  141076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:40:31.674455  141076 kubeadm.go:586] duration metric: took 13.562973839s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:40:31.674479  141076 node_conditions.go:102] verifying NodePressure condition ...
	I1009 18:40:31.674487  141076 retry.go:31] will retry after 373.234138ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:40:31.674491  141076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.18341006s)
	I1009 18:40:31.674527  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:31.674542  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:31.674844  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:31.674863  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:31.674873  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:31.674880  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:31.675134  141076 main.go:141] libmachine: (addons-916037) DBG | Closing plugin on server side
	I1009 18:40:31.675157  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:31.675165  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:31.688596  141076 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 18:40:31.688633  141076 node_conditions.go:123] node cpu capacity is 2
	I1009 18:40:31.688652  141076 node_conditions.go:105] duration metric: took 14.165278ms to run NodePressure ...
	I1009 18:40:31.688670  141076 start.go:242] waiting for startup goroutines ...
	I1009 18:40:31.742329  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:31.748779  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:31.961951  141076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.052463792s)
	I1009 18:40:31.962020  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:31.962038  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:31.962315  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:31.962334  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:31.962344  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:40:31.962347  141076 main.go:141] libmachine: (addons-916037) DBG | Closing plugin on server side
	I1009 18:40:31.962351  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:40:31.962647  141076 main.go:141] libmachine: (addons-916037) DBG | Closing plugin on server side
	I1009 18:40:31.962693  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:40:31.962706  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:40:31.963657  141076 addons.go:479] Verifying addon gcp-auth=true in "addons-916037"
	I1009 18:40:31.965903  141076 out.go:179] * Verifying gcp-auth addon...
	I1009 18:40:31.967954  141076 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1009 18:40:32.020385  141076 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1009 18:40:32.020413  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:32.048912  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:40:32.058244  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:32.232405  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:32.233163  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:32.476074  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:32.576860  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:32.716642  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:32.718305  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:32.974085  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:33.036846  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:33.216628  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:33.221009  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:33.472609  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:33.532731  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:33.715284  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:33.719137  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:33.956497  141076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.907541643s)
	W1009 18:40:33.956537  141076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:40:33.956575  141076 retry.go:31] will retry after 564.005841ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:40:33.978162  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:34.034379  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:34.215170  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:34.215621  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:34.472023  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:34.521229  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:40:34.533226  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:34.714112  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:34.718498  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:34.971736  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:35.032237  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:35.216485  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:35.218045  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1009 18:40:35.247828  141076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:40:35.247874  141076 retry.go:31] will retry after 665.322748ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:40:35.472616  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:35.530918  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:35.712761  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:35.714919  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:35.914252  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:40:35.974653  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:36.031103  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:36.213244  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:36.216324  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:36.472504  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:36.532704  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:40:36.617328  141076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:40:36.617361  141076 retry.go:31] will retry after 1.118329187s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:40:36.713939  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:36.715713  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:36.972130  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:37.033533  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:37.216897  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:37.220853  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:37.471852  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:37.534743  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:37.720705  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:37.721539  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:37.736680  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:40:37.972709  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:38.034809  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:38.219592  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:38.219825  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:38.472243  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:38.544315  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:38.720108  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:38.721388  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:38.972506  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:39.032593  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:39.162813  141076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.42608974s)
	W1009 18:40:39.162870  141076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:40:39.162896  141076 retry.go:31] will retry after 2.846019006s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:40:39.213886  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:39.216586  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:39.473718  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:39.530949  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:39.718132  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:39.719629  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:39.977699  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:40.034574  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:40.216375  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:40.220529  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:40.471635  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:40.535154  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:40.722803  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:40.723026  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:40.973812  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:41.035578  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:41.216892  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:41.220475  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:41.474034  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:41.533940  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:41.713509  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:41.722262  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:41.976216  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:42.009388  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:40:42.036716  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:42.224759  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:42.225726  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:42.474897  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:42.532690  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:42.715848  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:42.719535  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:42.971785  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:43.035823  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:43.102422  141076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.092995966s)
	W1009 18:40:43.102480  141076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:40:43.102501  141076 retry.go:31] will retry after 2.995108699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:40:43.215387  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:43.218902  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:43.473778  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:43.920946  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:43.932939  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:43.934533  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:44.032195  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:44.032373  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:44.216984  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:44.217646  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:44.474636  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:44.535818  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:44.714111  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:44.717890  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:44.973644  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:45.032041  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:45.213471  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:45.217613  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:45.472101  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:45.535767  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:45.718589  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:45.723487  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:45.972317  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:46.032065  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:46.098297  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:40:46.213950  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:46.220015  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:46.473809  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:46.531345  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:46.717385  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:46.719997  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:46.971899  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:47.040455  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:40:47.062361  141076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:40:47.062398  141076 retry.go:31] will retry after 5.562029856s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:40:47.222790  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:47.223260  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:47.472342  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:47.534411  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:47.780277  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:47.817525  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:48.019735  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:48.033090  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:48.214932  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:48.217445  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:48.472047  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:48.535202  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:49.047711  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:49.050015  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:49.050941  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:49.051310  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:49.213908  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:49.218009  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:49.472768  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:49.531805  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:49.714649  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:49.717875  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:49.971457  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:50.032016  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:50.217813  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:50.221593  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:50.474188  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:50.534338  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:50.713825  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:50.717187  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:50.972330  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:51.032378  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:51.219218  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:51.220444  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:51.471350  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:51.533394  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:51.714079  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:51.716223  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:51.972128  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:52.032153  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:52.213862  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:52.216787  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:52.472809  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:52.531255  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:52.625437  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:40:52.712427  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:52.718699  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:52.973177  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:53.033323  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:53.215414  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:53.216588  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1009 18:40:53.367620  141076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:40:53.367661  141076 retry.go:31] will retry after 8.852694702s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:40:53.472436  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:53.532869  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:53.716496  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:53.719857  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:53.976439  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:54.035532  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:54.227627  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:54.228954  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:54.472598  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:54.532613  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:54.716379  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:54.717490  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:54.973392  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:55.032439  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:55.215097  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:55.215978  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:55.471645  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:55.531995  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:55.713752  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:55.718405  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:55.971672  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:56.032124  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:56.215917  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:56.218049  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:56.472001  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:56.531593  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:56.714990  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:56.718545  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:56.973488  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:57.034625  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:57.217214  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:57.219033  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:57.473292  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:57.531550  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:57.716418  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:57.717737  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:57.973929  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:58.032178  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:58.213531  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:58.215519  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:58.472744  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:58.533330  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:58.715139  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:58.717442  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:58.973685  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:59.072903  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:59.214091  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:59.216486  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:59.472411  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:40:59.531642  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:40:59.714951  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:40:59.715974  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:40:59.972139  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:00.032950  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:00.213337  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:00.215688  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:00.473619  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:00.531499  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:00.713776  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:00.715743  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:00.972156  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:01.033338  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:01.222289  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:01.222331  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:01.473472  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:01.534008  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:01.721018  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:01.721309  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:01.977454  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:02.034456  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:02.218240  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:02.219675  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:02.220611  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:41:02.473866  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:02.532202  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:02.714631  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:02.717784  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:02.976381  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:03.037858  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:03.378642  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:03.382327  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:03.472000  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:03.532485  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:03.650403  141076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.429747287s)
	W1009 18:41:03.650463  141076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:41:03.650491  141076 retry.go:31] will retry after 5.991055578s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:41:03.717397  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:03.721107  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:03.971970  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:04.031951  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:04.216722  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:04.222525  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:04.471570  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:04.532866  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:04.718370  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:04.724329  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:04.972657  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:05.035838  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:05.221761  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:05.221942  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:05.693314  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:05.697115  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:05.720858  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:05.724429  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:05.975933  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:06.033415  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:06.213690  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:06.215439  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:06.471348  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:06.533266  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:06.715354  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:06.724072  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:06.971548  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:07.033177  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:07.215118  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:07.216869  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:07.473841  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:07.534032  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:07.717004  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:07.721272  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:07.978986  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:08.034619  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:08.219894  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:08.222213  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:08.474418  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:08.536437  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:08.718598  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:08.719380  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:08.972454  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:09.033829  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:09.216614  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:09.220098  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:09.471418  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:09.532030  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:09.642175  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:41:09.713939  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:09.716620  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:10.078961  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:10.079525  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:10.216362  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:10.219548  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:10.474401  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:10.532590  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1009 18:41:10.601482  141076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:41:10.601521  141076 retry.go:31] will retry after 12.799601382s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:41:10.714041  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:10.716658  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:10.973455  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:11.032295  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:11.216382  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:11.218130  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:11.472594  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:11.531010  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:11.713444  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:11.717974  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:11.971044  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:12.033333  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:12.215804  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:12.216122  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:12.472219  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:12.531666  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:12.714871  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:12.719005  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:12.971404  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:13.038257  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:13.218940  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:13.223114  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:13.472264  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:13.533550  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:13.716414  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:13.717989  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:13.973908  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:14.031894  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:14.218571  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:14.218706  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:14.473097  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:14.531429  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:14.714387  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:14.715933  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:14.972364  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:15.032447  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:15.213660  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:15.220322  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:15.473063  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:15.531520  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:15.713899  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:15.716089  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:15.971327  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:16.031825  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:16.213511  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:16.216188  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:16.471858  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:16.531829  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:16.714438  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:16.716942  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:16.973840  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:17.033596  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:17.217179  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:17.217302  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:17.473082  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:17.535659  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:17.715079  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:17.719680  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:17.972645  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:18.033948  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:18.213731  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:18.216587  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:18.476771  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:18.577585  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:18.713971  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:18.716039  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:18.971229  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:19.031784  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:19.214974  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:41:19.215539  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:19.491470  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:19.533084  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:19.713706  141076 kapi.go:107] duration metric: took 52.003983112s to wait for kubernetes.io/minikube-addons=registry ...
	I1009 18:41:19.717170  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:19.972042  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:20.031889  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:20.216852  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:20.474437  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:20.534741  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:20.719379  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:20.972163  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:21.032480  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:21.217162  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:21.475183  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:21.576037  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:21.716361  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:21.976853  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:22.033514  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:22.218382  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:22.472850  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:22.536635  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:22.716340  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:22.972162  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:23.032617  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:23.218661  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:23.401264  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:41:23.475351  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:23.531526  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:23.715470  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:23.972249  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:24.031249  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:24.217402  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1009 18:41:24.342926  141076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:41:24.342971  141076 retry.go:31] will retry after 32.042035334s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
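	(The retry.go:31 line above shows the addon-enable path re-running the failed kubectl apply after a randomized delay. A small, hedged sketch of that retry-with-backoff pattern is below; the helper name retryWithJitter and the exact backoff/jitter policy are assumptions for illustration, not minikube's retry package:)

	// retrysketch.go - illustrative sketch of the "will retry after ..." pattern.
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithJitter re-runs fn until it succeeds or attempts are exhausted,
	// sleeping a randomized, growing delay between tries and logging it the
	// way the log above does ("will retry after 32.042035334s: ...").
	func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			if i == attempts-1 {
				break
			}
			// Exponential backoff with +/-50% jitter (an assumption for this sketch).
			delay := base << uint(i)
			jitter := time.Duration(rand.Int63n(int64(delay))) - delay/2
			wait := delay + jitter
			fmt.Printf("will retry after %s: %v\n", wait, err)
			time.Sleep(wait)
		}
		return err
	}

	func main() {
		calls := 0
		err := retryWithJitter(3, 2*time.Second, func() error {
			calls++
			if calls < 3 {
				return fmt.Errorf("apply failed (attempt %d)", calls)
			}
			return nil
		})
		fmt.Println("final result:", err)
	}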
	I1009 18:41:24.471455  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:24.532873  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:24.721241  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:24.973406  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:25.033534  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:25.216517  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:25.491776  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:25.588751  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:25.720209  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:25.974949  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:26.033579  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:26.219060  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:26.473088  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:26.535662  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:26.720328  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:26.973681  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:27.033382  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:27.218518  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:27.473288  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:27.532311  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:27.748093  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:27.972496  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:28.032437  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:28.219161  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:28.473506  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:28.537220  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:28.717436  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:28.971905  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:29.032411  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:29.217219  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:29.471301  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:29.531666  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:29.716265  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:29.972064  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:30.031623  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:30.215954  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:30.472443  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:30.535199  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:30.716623  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:30.974020  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:31.031196  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:31.218133  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:31.472470  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:31.536258  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:31.720320  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:31.974191  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:32.031872  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:32.219471  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:32.474502  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:32.534743  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:32.720710  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:32.975452  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:33.034608  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:33.216811  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:33.472870  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:33.532503  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:33.716620  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:33.973896  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:34.035616  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:34.218178  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:34.474053  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:34.532334  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:34.717527  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:34.972041  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:35.034419  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:35.219990  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:35.472048  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:35.533346  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:35.719239  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:35.973569  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:36.036640  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:36.216681  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:36.472400  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:36.573424  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:36.717013  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:36.972732  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:37.033712  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:37.217449  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:37.472471  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:37.532699  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:37.722114  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:37.976438  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:38.033285  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:38.216015  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:38.475188  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:38.533714  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:38.727624  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:38.973109  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:39.031740  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:39.215815  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:39.475453  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:39.575547  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:39.722738  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:39.971989  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:40.033892  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:40.216309  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:40.472588  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:40.535778  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:40.721471  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:40.973531  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:41.031771  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:41.219604  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:41.472152  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:41.532177  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:41.717751  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:41.980015  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:42.039495  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:42.216771  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:42.474984  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:42.534431  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:42.719638  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:42.974648  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:43.032237  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:43.218274  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:43.472712  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:43.532659  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:43.716399  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:43.972639  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:44.074357  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:44.220171  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:44.473364  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:44.534746  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:44.719924  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:44.972096  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:45.032071  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:45.220676  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:45.472486  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:45.532070  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:45.715951  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:45.972629  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:46.030865  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:46.219396  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:46.471303  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:46.532990  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:46.863709  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:46.974544  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:47.032182  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:47.219503  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:47.472843  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:47.531235  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:47.719831  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:47.987248  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:48.086451  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:48.215879  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:48.473308  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:48.532278  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:48.716683  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:48.974289  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:49.033474  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:49.216688  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:49.475731  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:49.534963  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:49.719724  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:49.975633  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:50.033981  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:50.218616  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:50.472171  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:50.531718  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:50.717797  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:50.972719  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:51.034388  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:51.216503  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:51.471980  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:51.538388  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:51.717436  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:51.972536  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:52.032503  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:41:52.217661  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:52.471977  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:52.531626  141076 kapi.go:107] duration metric: took 1m23.004084312s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1009 18:41:52.716071  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:52.971705  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:53.217647  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:53.472871  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:53.716821  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:53.972590  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:54.216143  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:54.472011  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:54.716655  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:54.972941  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:55.216525  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:55.472479  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:55.716335  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:55.971435  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:56.216681  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:56.385273  141076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1009 18:41:56.474292  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:56.716729  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:56.972779  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1009 18:41:57.127978  141076 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:41:57.128078  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:41:57.128096  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:41:57.128391  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:41:57.128412  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:41:57.128445  141076 main.go:141] libmachine: (addons-916037) DBG | Closing plugin on server side
	I1009 18:41:57.128505  141076 main.go:141] libmachine: Making call to close driver server
	I1009 18:41:57.128528  141076 main.go:141] libmachine: (addons-916037) Calling .Close
	I1009 18:41:57.128749  141076 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:41:57.128766  141076 main.go:141] libmachine: Making call to close connection to plugin binary
	W1009 18:41:57.128848  141076 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
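	(Both apply attempts for the inspektor-gadget addon failed for the same reason recorded in stderr above: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because at least one document in the manifest has no top-level apiVersion or kind field, so only the non-failing workaround mentioned in the message — --validate=false — would let the apply proceed as written. As a rough way to reproduce that check outside the cluster, one could scan each YAML document for those keys before applying; the sketch below uses gopkg.in/yaml.v3 and the file path from the log, and only illustrates the validation rule, not the addon's fix:)

	// validatecheck.go - illustrative sketch; checks the same condition kubectl
	// reports above ("apiVersion not set, kind not set").
	package main

	import (
		"errors"
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	// missingTypeMeta returns the indexes of documents in a (possibly
	// multi-document) YAML manifest that lack a top-level apiVersion or kind.
	func missingTypeMeta(path string) ([]int, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()

		var bad []int
		dec := yaml.NewDecoder(f)
		for i := 0; ; i++ {
			var doc map[string]interface{}
			err := dec.Decode(&doc)
			if errors.Is(err, io.EOF) {
				break
			}
			if err != nil {
				return nil, err
			}
			if doc == nil { // empty document between "---" separators
				continue
			}
			if doc["apiVersion"] == nil || doc["kind"] == nil {
				bad = append(bad, i)
			}
		}
		return bad, nil
	}

	func main() {
		// Path taken from the log above; run this where the manifest exists.
		bad, err := missingTypeMeta("/etc/kubernetes/addons/ig-crd.yaml")
		if err != nil {
			log.Fatal(err)
		}
		for _, i := range bad {
			fmt.Printf("document %d is missing apiVersion and/or kind\n", i)
		}
	}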
	I1009 18:41:57.219040  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:57.471367  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:57.716375  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:57.972405  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:58.217217  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:58.473085  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:58.716504  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:58.972976  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:59.216355  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:59.472867  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:41:59.716615  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:41:59.971835  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:00.217533  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:00.472714  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:00.716796  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:00.972776  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:01.216779  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:01.472261  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:01.717388  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:01.971931  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:02.217751  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:02.473815  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:02.717790  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:02.973073  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:03.216979  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:03.473226  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:03.716906  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:03.972056  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:04.216665  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:04.472526  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:04.716747  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:04.973920  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:05.217772  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:05.472747  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:05.717144  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:05.972594  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:06.216928  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:06.471894  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:06.716912  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:06.972254  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:07.217896  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:07.471903  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:07.716969  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:07.971205  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:08.217071  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:08.473068  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:08.716412  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:08.972125  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:09.217686  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:09.472108  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:09.716992  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:09.973525  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:10.218038  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:10.472272  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:10.716776  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:10.972408  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:11.217385  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:11.471639  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:11.716727  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:11.972057  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:12.217170  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:12.472694  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:12.716167  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:12.971423  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:13.217727  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:13.472038  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:13.721711  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:13.972152  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:14.216786  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:14.472443  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:14.716359  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:14.972253  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:15.216913  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:15.471788  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:15.716673  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:15.972215  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:16.217001  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:16.472667  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:16.716585  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:16.972342  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:17.216364  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:17.471695  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:17.716119  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:17.971817  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:18.216070  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:18.472516  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:18.716513  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:18.972666  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:19.217732  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:19.472984  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:19.717200  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:19.971644  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:20.216890  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:20.472343  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:20.715993  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:20.971456  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:21.216927  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:21.472458  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:21.715822  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:21.972436  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:22.216522  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:22.472370  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:22.716037  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:22.971681  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:23.217117  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:23.473924  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:23.716869  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:23.972718  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:24.216407  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:24.471858  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:24.717492  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:24.972226  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:25.217658  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:25.494187  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:25.717989  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:25.971294  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:26.215869  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:26.472369  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:26.716091  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:26.971811  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:27.216679  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:27.472504  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:27.715886  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:27.972258  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:28.216896  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:28.473621  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:28.717255  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:28.971724  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:29.217116  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:29.472041  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:29.716896  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:29.973502  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:30.217046  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:30.472071  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:30.717411  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:30.973384  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:31.217063  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:31.472522  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:31.715786  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:31.972676  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:32.216800  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:32.472964  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:32.717248  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:32.971735  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:33.218543  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:33.472073  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:33.717255  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:33.972243  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:34.218018  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:34.471393  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:34.716328  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:34.971422  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:35.216809  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:35.472227  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:35.717709  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:35.972234  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:36.216074  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:36.473911  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:36.717063  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:36.971435  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:37.216818  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:37.472469  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:37.716677  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:37.972254  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:38.216929  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:38.471453  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:38.716041  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:38.971195  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:39.217921  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:39.470868  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:39.717359  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:39.972310  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:40.217041  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:40.471283  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:40.717423  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:40.971707  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:41.216632  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:41.472320  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:41.717037  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:41.971530  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:42.216823  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:42.472651  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:42.716115  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:42.971721  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:43.217162  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:43.472017  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:43.716990  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:43.971256  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:44.217046  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:44.471579  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:44.715919  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:44.970999  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:45.221752  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:45.476242  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:45.717882  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:45.972983  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:46.216306  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:46.473897  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:46.722169  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:46.971355  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:47.219255  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:47.477997  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:47.721307  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:47.972869  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:48.218417  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:48.474002  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:48.718423  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:48.971664  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:49.218032  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:49.471453  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:49.717231  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:49.973270  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:50.219047  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:50.472177  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:50.717693  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:50.972515  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:51.219018  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:51.473632  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:51.720154  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:51.973963  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:52.220089  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:52.472916  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:52.716464  141076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:42:52.972814  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:53.216840  141076 kapi.go:107] duration metric: took 2m25.504353006s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1009 18:42:53.471900  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:53.972293  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:54.509707  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:54.971391  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:55.474989  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:55.973597  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:56.472101  141076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:42:56.975506  141076 kapi.go:107] duration metric: took 2m25.007551811s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1009 18:42:56.976744  141076 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-916037 cluster.
	I1009 18:42:56.977795  141076 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1009 18:42:56.978964  141076 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1009 18:42:56.980035  141076 out.go:179] * Enabled addons: amd-gpu-device-plugin, default-storageclass, nvidia-device-plugin, storage-provisioner, registry-creds, cloud-spanner, metrics-server, ingress-dns, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1009 18:42:56.980879  141076 addons.go:514] duration metric: took 2m38.869332156s for enable addons: enabled=[amd-gpu-device-plugin default-storageclass nvidia-device-plugin storage-provisioner registry-creds cloud-spanner metrics-server ingress-dns yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1009 18:42:56.980931  141076 start.go:247] waiting for cluster config update ...
	I1009 18:42:56.980959  141076 start.go:256] writing updated cluster config ...
	I1009 18:42:56.981269  141076 ssh_runner.go:195] Run: rm -f paused
	I1009 18:42:56.994821  141076 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 18:42:57.003140  141076 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9qg6w" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:42:57.009065  141076 pod_ready.go:94] pod "coredns-66bc5c9577-9qg6w" is "Ready"
	I1009 18:42:57.009087  141076 pod_ready.go:86] duration metric: took 5.923417ms for pod "coredns-66bc5c9577-9qg6w" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:42:57.012056  141076 pod_ready.go:83] waiting for pod "etcd-addons-916037" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:42:57.020370  141076 pod_ready.go:94] pod "etcd-addons-916037" is "Ready"
	I1009 18:42:57.020397  141076 pod_ready.go:86] duration metric: took 8.32302ms for pod "etcd-addons-916037" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:42:57.023016  141076 pod_ready.go:83] waiting for pod "kube-apiserver-addons-916037" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:42:57.032528  141076 pod_ready.go:94] pod "kube-apiserver-addons-916037" is "Ready"
	I1009 18:42:57.032550  141076 pod_ready.go:86] duration metric: took 9.509734ms for pod "kube-apiserver-addons-916037" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:42:57.035477  141076 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-916037" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:42:57.399841  141076 pod_ready.go:94] pod "kube-controller-manager-addons-916037" is "Ready"
	I1009 18:42:57.399866  141076 pod_ready.go:86] duration metric: took 364.371009ms for pod "kube-controller-manager-addons-916037" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:42:57.599605  141076 pod_ready.go:83] waiting for pod "kube-proxy-nfbpj" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:42:57.999613  141076 pod_ready.go:94] pod "kube-proxy-nfbpj" is "Ready"
	I1009 18:42:57.999639  141076 pod_ready.go:86] duration metric: took 400.009014ms for pod "kube-proxy-nfbpj" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:42:58.199421  141076 pod_ready.go:83] waiting for pod "kube-scheduler-addons-916037" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:42:58.601026  141076 pod_ready.go:94] pod "kube-scheduler-addons-916037" is "Ready"
	I1009 18:42:58.601062  141076 pod_ready.go:86] duration metric: took 401.614419ms for pod "kube-scheduler-addons-916037" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 18:42:58.601078  141076 pod_ready.go:40] duration metric: took 1.606224913s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 18:42:58.648181  141076 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1009 18:42:58.649915  141076 out.go:179] * Done! kubectl is now configured to use "addons-916037" cluster and "default" namespace by default
	
	
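	The gcp-auth lines above note that a pod can opt out of credential mounting by carrying the `gcp-auth-skip-secret` label. As an illustrative sketch only (it is not part of the captured log), a manifest using that label might look like the following; the pod name and image are placeholders, and the "true" value is an assumption since the log names only the label key:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-auth-example        # hypothetical name, for illustration only
	  labels:
	    gcp-auth-skip-secret: "true"   # label key from the log; "true" value is assumed
	spec:
	  containers:
	  - name: app
	    image: nginx                   # placeholder image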
	==> CRI-O <==
	Oct 09 18:45:56 addons-916037 crio[815]: time="2025-10-09 18:45:56.157109542Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760035556157079915,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598010,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a56c6b15-4f28-4f51-b136-4a5acf1b785a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 18:45:56 addons-916037 crio[815]: time="2025-10-09 18:45:56.157806079Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31cca8f0-bd13-4844-b066-16f2668c6824 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:45:56 addons-916037 crio[815]: time="2025-10-09 18:45:56.158121710Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31cca8f0-bd13-4844-b066-16f2668c6824 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:45:56 addons-916037 crio[815]: time="2025-10-09 18:45:56.158791572Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:daea485215ac3fa30f45ba8981eedfbdc96e46c904b6b717076c932d8f4c00e6,PodSandboxId:775da1ae7f7033726fda9b633f1a01c907a0ea508d6c7c625a79622cb62977f6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7c1b9a91514d1eb5288d7cd6e91d9f451707911bfaea9307a3acbc811d4aa82e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760035412891917431,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7461112d-e3eb-4015-adf9-246e185bff35,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f1227841d9840badac38d48138ca68283681a559944db976e9ba25635961944,PodSandboxId:2db16b8e37c8f899019db9aab595e7c71b47dab392afb9dd86db0a27cf56e2aa,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760035383190341220,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fb5a2901-453c-4cfa-8395-271b98194991,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:128f66b7c9ed5d7f1a51a90f1b4868f979da92ed617ba8055db30639b89352c2,PodSandboxId:dcc3495078bea62fcbaa3d2fcf5df1b659eb1aa65d1e68022fb1d4248edd6bdd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1760035372735767209,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-j2rkt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c210bc9b-e01f-4720-8e08-045e51173ad1,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b0446cb429f2efe0c7b07c3e29cceadf309c7c9dfdae999acc644a511bf49c32,PodSandboxId:911e4d952556b31b5b019508bfb62a7429a1806bc226efc6979c96893ef8462b,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da673
4db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1760035298661192608,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rfmnn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f426c993-7de0-47fd-879b-684371071543,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:478bb55ae69a0cdeef3309101edd73b869dd3dfedb7b5337e1608ac53b145da5,PodSandboxId:c08ddd31da0a2ca9ac068fbea68a721236f0abd39c2b036472ec925fbeb60694,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1760035296249934674,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lp67s,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: de459587-91f1-4212-b135-0697023708d8,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9faea9c54e567520e0642d3af1c5d0776a1549320b0c14129349115d6b4cf857,PodSandboxId:bd40a8c6ff759c3c72b02654aae9f9624239e0b68b9888be86c67b135094a4db,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760035288551147008,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-rhngd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 0bd96665-3b91-4342-96ff-226330707e9c,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cc9a35fcb9611bee264c376da7e9dd5f1cd8c13f1b9d4aa56ca1344efb09fcf,PodSandboxId:6355c53359eef5abc27300a578c4eff16320db0a96556d57f36128a17686a72d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760035270222498432,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8644093-2bd9-4e37-ba1a-ac4506191dfb,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ede35603b432ec80bf6aade41dcb81d54d909ca399c38de1737281de63561b,PodSandboxId:6aadc648dc4f28dacb003c3f3c4725440c78bc040b91d1e
7cbd16a97b877d273,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760035233531591464,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-67vlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3df5d7ee-6455-465f-ad7b-b2a61ede0c07,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37f21605afb4018635092483bd4710ea61635abe5d2f0653f60fd346a16579a7,PodSandboxId:eff9286
2f72fd946c6d31ca7d694ae03ec4a00a0c98db8c138288b3d543d1281,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760035231511633417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2884fb-ec42-4ca4-954f-7c3e20f304a7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d0f77d46e66277f37724b69f310995e948e40c422827bdb683fd2cbdcf0c8f5,PodSandboxId:beb507e44e5e2947f68
41ac5222cb91ec371ed7bcf1b36a2d58e8280f758dfcb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760035220216261009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9qg6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ca638cc-b670-4c22-881b-96987cd0ee8c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a51d36cb84fae54c73c9a3c77f36e88c49d47f2b59fb81b94a28326dc982983,PodSandboxId:f308c398aec32794a182d138f6528ee56b886845c5e15aad03c1d51d1f6057f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760035219005282704,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nfbpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2c65154-0654-4cc6-9e3b-5c92042bd666,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4ef68e06ff2e70491f5c42d53b36ac90ecf51c91dddc407ede20a0191d23a8,PodSandboxId:405c44a6ee014e252bfba5a29a5840ed4799558a2ef7708b81974b360e7e52ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760035207583268250,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-916037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fe40f5b8fcf2c82141d3b65d7a41cc0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.k
ubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7678e71160fe8ae8264529f0fc189eefaec92993af6bdf570033a1a8f44856a9,PodSandboxId:44e8ae19fe7ddda6d0e66b00f0043d6c8c4b2c0f1f9d385864fd7ca0529f94a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760035207633265112,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-916037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09b78943e1388cc7e9c860565978244d
,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb563bed9637bc802e51071435813dece8011eebc96a4dc1f877756e49a3b4f,PodSandboxId:b632f121344b7e7ec12d3225d538d95d7f46dd61001e3ad0f6d6ec9bd4fb2186,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760035207574416401,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons
-916037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31b928894d25c42d08dfd4353cacc691,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a20d7640d4c04da648fb04ca5a8b60deb8aa5e7676aefb4aacf011ad44567482,PodSandboxId:f8d87e24f03947fefb1421622aec04b0cf4606471056220e81c0e8cd35556ebd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760035207564142104,Labels:map[
string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-916037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f18222b578b6a5ff3ad2d5e140bcdc5,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=31cca8f0-bd13-4844-b066-16f2668c6824 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:45:56 addons-916037 crio[815]: time="2025-10-09 18:45:56.201071700Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7afa2134-1e8f-47db-9da2-bf7902c61245 name=/runtime.v1.RuntimeService/Version
	Oct 09 18:45:56 addons-916037 crio[815]: time="2025-10-09 18:45:56.201206108Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7afa2134-1e8f-47db-9da2-bf7902c61245 name=/runtime.v1.RuntimeService/Version
	Oct 09 18:45:56 addons-916037 crio[815]: time="2025-10-09 18:45:56.203193336Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a50f884d-a4bf-41c8-8de2-5e59839330b0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 18:45:56 addons-916037 crio[815]: time="2025-10-09 18:45:56.204486068Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760035556204460631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598010,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a50f884d-a4bf-41c8-8de2-5e59839330b0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 18:45:56 addons-916037 crio[815]: time="2025-10-09 18:45:56.205233939Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cd31f30e-e418-47e6-ad0b-534edfd94059 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:45:56 addons-916037 crio[815]: time="2025-10-09 18:45:56.205293316Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cd31f30e-e418-47e6-ad0b-534edfd94059 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:45:56 addons-916037 crio[815]: time="2025-10-09 18:45:56.205604262Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:daea485215ac3fa30f45ba8981eedfbdc96e46c904b6b717076c932d8f4c00e6,PodSandboxId:775da1ae7f7033726fda9b633f1a01c907a0ea508d6c7c625a79622cb62977f6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7c1b9a91514d1eb5288d7cd6e91d9f451707911bfaea9307a3acbc811d4aa82e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760035412891917431,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7461112d-e3eb-4015-adf9-246e185bff35,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f1227841d9840badac38d48138ca68283681a559944db976e9ba25635961944,PodSandboxId:2db16b8e37c8f899019db9aab595e7c71b47dab392afb9dd86db0a27cf56e2aa,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760035383190341220,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fb5a2901-453c-4cfa-8395-271b98194991,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:128f66b7c9ed5d7f1a51a90f1b4868f979da92ed617ba8055db30639b89352c2,PodSandboxId:dcc3495078bea62fcbaa3d2fcf5df1b659eb1aa65d1e68022fb1d4248edd6bdd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1760035372735767209,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-j2rkt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c210bc9b-e01f-4720-8e08-045e51173ad1,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b0446cb429f2efe0c7b07c3e29cceadf309c7c9dfdae999acc644a511bf49c32,PodSandboxId:911e4d952556b31b5b019508bfb62a7429a1806bc226efc6979c96893ef8462b,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da673
4db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1760035298661192608,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rfmnn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f426c993-7de0-47fd-879b-684371071543,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:478bb55ae69a0cdeef3309101edd73b869dd3dfedb7b5337e1608ac53b145da5,PodSandboxId:c08ddd31da0a2ca9ac068fbea68a721236f0abd39c2b036472ec925fbeb60694,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1760035296249934674,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lp67s,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: de459587-91f1-4212-b135-0697023708d8,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9faea9c54e567520e0642d3af1c5d0776a1549320b0c14129349115d6b4cf857,PodSandboxId:bd40a8c6ff759c3c72b02654aae9f9624239e0b68b9888be86c67b135094a4db,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760035288551147008,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-rhngd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 0bd96665-3b91-4342-96ff-226330707e9c,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cc9a35fcb9611bee264c376da7e9dd5f1cd8c13f1b9d4aa56ca1344efb09fcf,PodSandboxId:6355c53359eef5abc27300a578c4eff16320db0a96556d57f36128a17686a72d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760035270222498432,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8644093-2bd9-4e37-ba1a-ac4506191dfb,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ede35603b432ec80bf6aade41dcb81d54d909ca399c38de1737281de63561b,PodSandboxId:6aadc648dc4f28dacb003c3f3c4725440c78bc040b91d1e
7cbd16a97b877d273,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760035233531591464,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-67vlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3df5d7ee-6455-465f-ad7b-b2a61ede0c07,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37f21605afb4018635092483bd4710ea61635abe5d2f0653f60fd346a16579a7,PodSandboxId:eff9286
2f72fd946c6d31ca7d694ae03ec4a00a0c98db8c138288b3d543d1281,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760035231511633417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2884fb-ec42-4ca4-954f-7c3e20f304a7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d0f77d46e66277f37724b69f310995e948e40c422827bdb683fd2cbdcf0c8f5,PodSandboxId:beb507e44e5e2947f68
41ac5222cb91ec371ed7bcf1b36a2d58e8280f758dfcb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760035220216261009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9qg6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ca638cc-b670-4c22-881b-96987cd0ee8c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a51d36cb84fae54c73c9a3c77f36e88c49d47f2b59fb81b94a28326dc982983,PodSandboxId:f308c398aec32794a182d138f6528ee56b886845c5e15aad03c1d51d1f6057f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760035219005282704,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nfbpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2c65154-0654-4cc6-9e3b-5c92042bd666,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4ef68e06ff2e70491f5c42d53b36ac90ecf51c91dddc407ede20a0191d23a8,PodSandboxId:405c44a6ee014e252bfba5a29a5840ed4799558a2ef7708b81974b360e7e52ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760035207583268250,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-916037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fe40f5b8fcf2c82141d3b65d7a41cc0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.k
ubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7678e71160fe8ae8264529f0fc189eefaec92993af6bdf570033a1a8f44856a9,PodSandboxId:44e8ae19fe7ddda6d0e66b00f0043d6c8c4b2c0f1f9d385864fd7ca0529f94a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760035207633265112,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-916037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09b78943e1388cc7e9c860565978244d
,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb563bed9637bc802e51071435813dece8011eebc96a4dc1f877756e49a3b4f,PodSandboxId:b632f121344b7e7ec12d3225d538d95d7f46dd61001e3ad0f6d6ec9bd4fb2186,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760035207574416401,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons
-916037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31b928894d25c42d08dfd4353cacc691,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a20d7640d4c04da648fb04ca5a8b60deb8aa5e7676aefb4aacf011ad44567482,PodSandboxId:f8d87e24f03947fefb1421622aec04b0cf4606471056220e81c0e8cd35556ebd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760035207564142104,Labels:map[
string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-916037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f18222b578b6a5ff3ad2d5e140bcdc5,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cd31f30e-e418-47e6-ad0b-534edfd94059 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:45:56 addons-916037 crio[815]: time="2025-10-09 18:45:56.246519809Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7be4b979-1a40-4a95-847c-8310bf410bbe name=/runtime.v1.RuntimeService/Version
	Oct 09 18:45:56 addons-916037 crio[815]: time="2025-10-09 18:45:56.246619086Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7be4b979-1a40-4a95-847c-8310bf410bbe name=/runtime.v1.RuntimeService/Version
	Oct 09 18:45:56 addons-916037 crio[815]: time="2025-10-09 18:45:56.248468454Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f28bc44d-ffb4-41ad-9a35-61b9f0769954 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 18:45:56 addons-916037 crio[815]: time="2025-10-09 18:45:56.249720597Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760035556249693395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598010,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f28bc44d-ffb4-41ad-9a35-61b9f0769954 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 18:45:56 addons-916037 crio[815]: time="2025-10-09 18:45:56.250388502Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b6cde8e-340f-4fcb-844f-52de0989fb5a name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:45:56 addons-916037 crio[815]: time="2025-10-09 18:45:56.250489713Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b6cde8e-340f-4fcb-844f-52de0989fb5a name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:45:56 addons-916037 crio[815]: time="2025-10-09 18:45:56.251551012Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:daea485215ac3fa30f45ba8981eedfbdc96e46c904b6b717076c932d8f4c00e6,PodSandboxId:775da1ae7f7033726fda9b633f1a01c907a0ea508d6c7c625a79622cb62977f6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7c1b9a91514d1eb5288d7cd6e91d9f451707911bfaea9307a3acbc811d4aa82e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760035412891917431,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7461112d-e3eb-4015-adf9-246e185bff35,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f1227841d9840badac38d48138ca68283681a559944db976e9ba25635961944,PodSandboxId:2db16b8e37c8f899019db9aab595e7c71b47dab392afb9dd86db0a27cf56e2aa,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760035383190341220,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fb5a2901-453c-4cfa-8395-271b98194991,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:128f66b7c9ed5d7f1a51a90f1b4868f979da92ed617ba8055db30639b89352c2,PodSandboxId:dcc3495078bea62fcbaa3d2fcf5df1b659eb1aa65d1e68022fb1d4248edd6bdd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1760035372735767209,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-j2rkt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c210bc9b-e01f-4720-8e08-045e51173ad1,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b0446cb429f2efe0c7b07c3e29cceadf309c7c9dfdae999acc644a511bf49c32,PodSandboxId:911e4d952556b31b5b019508bfb62a7429a1806bc226efc6979c96893ef8462b,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da673
4db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1760035298661192608,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rfmnn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f426c993-7de0-47fd-879b-684371071543,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:478bb55ae69a0cdeef3309101edd73b869dd3dfedb7b5337e1608ac53b145da5,PodSandboxId:c08ddd31da0a2ca9ac068fbea68a721236f0abd39c2b036472ec925fbeb60694,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1760035296249934674,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lp67s,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: de459587-91f1-4212-b135-0697023708d8,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9faea9c54e567520e0642d3af1c5d0776a1549320b0c14129349115d6b4cf857,PodSandboxId:bd40a8c6ff759c3c72b02654aae9f9624239e0b68b9888be86c67b135094a4db,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760035288551147008,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-rhngd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 0bd96665-3b91-4342-96ff-226330707e9c,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cc9a35fcb9611bee264c376da7e9dd5f1cd8c13f1b9d4aa56ca1344efb09fcf,PodSandboxId:6355c53359eef5abc27300a578c4eff16320db0a96556d57f36128a17686a72d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760035270222498432,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8644093-2bd9-4e37-ba1a-ac4506191dfb,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ede35603b432ec80bf6aade41dcb81d54d909ca399c38de1737281de63561b,PodSandboxId:6aadc648dc4f28dacb003c3f3c4725440c78bc040b91d1e
7cbd16a97b877d273,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760035233531591464,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-67vlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3df5d7ee-6455-465f-ad7b-b2a61ede0c07,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37f21605afb4018635092483bd4710ea61635abe5d2f0653f60fd346a16579a7,PodSandboxId:eff9286
2f72fd946c6d31ca7d694ae03ec4a00a0c98db8c138288b3d543d1281,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760035231511633417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2884fb-ec42-4ca4-954f-7c3e20f304a7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d0f77d46e66277f37724b69f310995e948e40c422827bdb683fd2cbdcf0c8f5,PodSandboxId:beb507e44e5e2947f68
41ac5222cb91ec371ed7bcf1b36a2d58e8280f758dfcb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760035220216261009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9qg6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ca638cc-b670-4c22-881b-96987cd0ee8c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a51d36cb84fae54c73c9a3c77f36e88c49d47f2b59fb81b94a28326dc982983,PodSandboxId:f308c398aec32794a182d138f6528ee56b886845c5e15aad03c1d51d1f6057f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760035219005282704,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nfbpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2c65154-0654-4cc6-9e3b-5c92042bd666,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4ef68e06ff2e70491f5c42d53b36ac90ecf51c91dddc407ede20a0191d23a8,PodSandboxId:405c44a6ee014e252bfba5a29a5840ed4799558a2ef7708b81974b360e7e52ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760035207583268250,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-916037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fe40f5b8fcf2c82141d3b65d7a41cc0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.k
ubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7678e71160fe8ae8264529f0fc189eefaec92993af6bdf570033a1a8f44856a9,PodSandboxId:44e8ae19fe7ddda6d0e66b00f0043d6c8c4b2c0f1f9d385864fd7ca0529f94a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760035207633265112,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-916037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09b78943e1388cc7e9c860565978244d
,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb563bed9637bc802e51071435813dece8011eebc96a4dc1f877756e49a3b4f,PodSandboxId:b632f121344b7e7ec12d3225d538d95d7f46dd61001e3ad0f6d6ec9bd4fb2186,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760035207574416401,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons
-916037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31b928894d25c42d08dfd4353cacc691,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a20d7640d4c04da648fb04ca5a8b60deb8aa5e7676aefb4aacf011ad44567482,PodSandboxId:f8d87e24f03947fefb1421622aec04b0cf4606471056220e81c0e8cd35556ebd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760035207564142104,Labels:map[
string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-916037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f18222b578b6a5ff3ad2d5e140bcdc5,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b6cde8e-340f-4fcb-844f-52de0989fb5a name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:45:56 addons-916037 crio[815]: time="2025-10-09 18:45:56.291415605Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c360039-3396-473a-aef6-44e74d48232d name=/runtime.v1.RuntimeService/Version
	Oct 09 18:45:56 addons-916037 crio[815]: time="2025-10-09 18:45:56.291502498Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c360039-3396-473a-aef6-44e74d48232d name=/runtime.v1.RuntimeService/Version
	Oct 09 18:45:56 addons-916037 crio[815]: time="2025-10-09 18:45:56.293202181Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=118f8fd4-f98c-46f9-ac17-b237d8691b94 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 18:45:56 addons-916037 crio[815]: time="2025-10-09 18:45:56.295228648Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760035556295122015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598010,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=118f8fd4-f98c-46f9-ac17-b237d8691b94 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 18:45:56 addons-916037 crio[815]: time="2025-10-09 18:45:56.296078399Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6022730-2f67-487b-bb7d-9b814ef07ec0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:45:56 addons-916037 crio[815]: time="2025-10-09 18:45:56.296248043Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6022730-2f67-487b-bb7d-9b814ef07ec0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 18:45:56 addons-916037 crio[815]: time="2025-10-09 18:45:56.296888839Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:daea485215ac3fa30f45ba8981eedfbdc96e46c904b6b717076c932d8f4c00e6,PodSandboxId:775da1ae7f7033726fda9b633f1a01c907a0ea508d6c7c625a79622cb62977f6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7c1b9a91514d1eb5288d7cd6e91d9f451707911bfaea9307a3acbc811d4aa82e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760035412891917431,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7461112d-e3eb-4015-adf9-246e185bff35,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f1227841d9840badac38d48138ca68283681a559944db976e9ba25635961944,PodSandboxId:2db16b8e37c8f899019db9aab595e7c71b47dab392afb9dd86db0a27cf56e2aa,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760035383190341220,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fb5a2901-453c-4cfa-8395-271b98194991,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:128f66b7c9ed5d7f1a51a90f1b4868f979da92ed617ba8055db30639b89352c2,PodSandboxId:dcc3495078bea62fcbaa3d2fcf5df1b659eb1aa65d1e68022fb1d4248edd6bdd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1760035372735767209,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-j2rkt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c210bc9b-e01f-4720-8e08-045e51173ad1,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b0446cb429f2efe0c7b07c3e29cceadf309c7c9dfdae999acc644a511bf49c32,PodSandboxId:911e4d952556b31b5b019508bfb62a7429a1806bc226efc6979c96893ef8462b,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da673
4db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1760035298661192608,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rfmnn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f426c993-7de0-47fd-879b-684371071543,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:478bb55ae69a0cdeef3309101edd73b869dd3dfedb7b5337e1608ac53b145da5,PodSandboxId:c08ddd31da0a2ca9ac068fbea68a721236f0abd39c2b036472ec925fbeb60694,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1760035296249934674,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lp67s,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: de459587-91f1-4212-b135-0697023708d8,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9faea9c54e567520e0642d3af1c5d0776a1549320b0c14129349115d6b4cf857,PodSandboxId:bd40a8c6ff759c3c72b02654aae9f9624239e0b68b9888be86c67b135094a4db,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760035288551147008,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-rhngd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 0bd96665-3b91-4342-96ff-226330707e9c,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cc9a35fcb9611bee264c376da7e9dd5f1cd8c13f1b9d4aa56ca1344efb09fcf,PodSandboxId:6355c53359eef5abc27300a578c4eff16320db0a96556d57f36128a17686a72d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760035270222498432,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8644093-2bd9-4e37-ba1a-ac4506191dfb,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ede35603b432ec80bf6aade41dcb81d54d909ca399c38de1737281de63561b,PodSandboxId:6aadc648dc4f28dacb003c3f3c4725440c78bc040b91d1e
7cbd16a97b877d273,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760035233531591464,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-67vlm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3df5d7ee-6455-465f-ad7b-b2a61ede0c07,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37f21605afb4018635092483bd4710ea61635abe5d2f0653f60fd346a16579a7,PodSandboxId:eff9286
2f72fd946c6d31ca7d694ae03ec4a00a0c98db8c138288b3d543d1281,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760035231511633417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2884fb-ec42-4ca4-954f-7c3e20f304a7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d0f77d46e66277f37724b69f310995e948e40c422827bdb683fd2cbdcf0c8f5,PodSandboxId:beb507e44e5e2947f68
41ac5222cb91ec371ed7bcf1b36a2d58e8280f758dfcb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760035220216261009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9qg6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ca638cc-b670-4c22-881b-96987cd0ee8c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a51d36cb84fae54c73c9a3c77f36e88c49d47f2b59fb81b94a28326dc982983,PodSandboxId:f308c398aec32794a182d138f6528ee56b886845c5e15aad03c1d51d1f6057f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760035219005282704,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nfbpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2c65154-0654-4cc6-9e3b-5c92042bd666,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4ef68e06ff2e70491f5c42d53b36ac90ecf51c91dddc407ede20a0191d23a8,PodSandboxId:405c44a6ee014e252bfba5a29a5840ed4799558a2ef7708b81974b360e7e52ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760035207583268250,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-916037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fe40f5b8fcf2c82141d3b65d7a41cc0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.k
ubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7678e71160fe8ae8264529f0fc189eefaec92993af6bdf570033a1a8f44856a9,PodSandboxId:44e8ae19fe7ddda6d0e66b00f0043d6c8c4b2c0f1f9d385864fd7ca0529f94a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760035207633265112,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-916037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09b78943e1388cc7e9c860565978244d
,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb563bed9637bc802e51071435813dece8011eebc96a4dc1f877756e49a3b4f,PodSandboxId:b632f121344b7e7ec12d3225d538d95d7f46dd61001e3ad0f6d6ec9bd4fb2186,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760035207574416401,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons
-916037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31b928894d25c42d08dfd4353cacc691,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a20d7640d4c04da648fb04ca5a8b60deb8aa5e7676aefb4aacf011ad44567482,PodSandboxId:f8d87e24f03947fefb1421622aec04b0cf4606471056220e81c0e8cd35556ebd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760035207564142104,Labels:map[
string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-916037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f18222b578b6a5ff3ad2d5e140bcdc5,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6022730-2f67-487b-bb7d-9b814ef07ec0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	daea485215ac3       docker.io/library/nginx@sha256:7c1b9a91514d1eb5288d7cd6e91d9f451707911bfaea9307a3acbc811d4aa82e                              2 minutes ago       Running             nginx                     0                   775da1ae7f703       nginx
	9f1227841d984       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   2db16b8e37c8f       busybox
	128f66b7c9ed5       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago       Running             controller                0                   dcc3495078bea       ingress-nginx-controller-9cc49f96f-j2rkt
	b0446cb429f2e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago       Exited              patch                     0                   911e4d952556b       ingress-nginx-admission-patch-rfmnn
	478bb55ae69a0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago       Exited              create                    0                   c08ddd31da0a2       ingress-nginx-admission-create-lp67s
	9faea9c54e567       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb            4 minutes ago       Running             gadget                    0                   bd40a8c6ff759       gadget-rhngd
	2cc9a35fcb961       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   6355c53359eef       kube-ingress-dns-minikube
	22ede35603b43       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   6aadc648dc4f2       amd-gpu-device-plugin-67vlm
	37f21605afb40       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   eff92862f72fd       storage-provisioner
	0d0f77d46e662       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago       Running             coredns                   0                   beb507e44e5e2       coredns-66bc5c9577-9qg6w
	8a51d36cb84fa       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             5 minutes ago       Running             kube-proxy                0                   f308c398aec32       kube-proxy-nfbpj
	7678e71160fe8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   44e8ae19fe7dd       etcd-addons-916037
	5a4ef68e06ff2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             5 minutes ago       Running             kube-controller-manager   0                   405c44a6ee014       kube-controller-manager-addons-916037
	2eb563bed9637       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             5 minutes ago       Running             kube-apiserver            0                   b632f121344b7       kube-apiserver-addons-916037
	a20d7640d4c04       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             5 minutes ago       Running             kube-scheduler            0                   f8d87e24f0394       kube-scheduler-addons-916037
	
	
	==> coredns [0d0f77d46e66277f37724b69f310995e948e40c422827bdb683fd2cbdcf0c8f5] <==
	[INFO] 10.244.0.8:49133 - 40029 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000889126s
	[INFO] 10.244.0.8:49133 - 55506 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00024211s
	[INFO] 10.244.0.8:49133 - 26801 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000176809s
	[INFO] 10.244.0.8:49133 - 9318 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000068921s
	[INFO] 10.244.0.8:49133 - 34845 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000109984s
	[INFO] 10.244.0.8:49133 - 46970 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000160856s
	[INFO] 10.244.0.8:49133 - 21260 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000356046s
	[INFO] 10.244.0.8:41949 - 20407 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000091465s
	[INFO] 10.244.0.8:41949 - 20730 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000112665s
	[INFO] 10.244.0.8:38100 - 16677 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000299103s
	[INFO] 10.244.0.8:38100 - 16900 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000059505s
	[INFO] 10.244.0.8:40594 - 14372 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000067691s
	[INFO] 10.244.0.8:40594 - 14628 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000154918s
	[INFO] 10.244.0.8:45421 - 14137 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000129456s
	[INFO] 10.244.0.8:45421 - 14360 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000103321s
	[INFO] 10.244.0.23:56865 - 60071 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000554221s
	[INFO] 10.244.0.23:49435 - 40382 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000301357s
	[INFO] 10.244.0.23:35724 - 37966 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000141387s
	[INFO] 10.244.0.23:53213 - 44080 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000084627s
	[INFO] 10.244.0.23:42502 - 16588 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000226536s
	[INFO] 10.244.0.23:40079 - 47320 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000164764s
	[INFO] 10.244.0.23:37940 - 43634 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001275884s
	[INFO] 10.244.0.23:57886 - 26686 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004719657s
	[INFO] 10.244.0.27:35597 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000837679s
	[INFO] 10.244.0.27:33673 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000092935s
	
	
	==> describe nodes <==
	Name:               addons-916037
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-916037
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=addons-916037
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T18_40_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-916037
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 18:40:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-916037
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 18:45:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 18:44:18 +0000   Thu, 09 Oct 2025 18:40:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 18:44:18 +0000   Thu, 09 Oct 2025 18:40:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 18:44:18 +0000   Thu, 09 Oct 2025 18:40:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 18:44:18 +0000   Thu, 09 Oct 2025 18:40:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.158
	  Hostname:    addons-916037
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 4607c46f118942148edd608677ba036a
	  System UUID:                4607c46f-1189-4214-8edd-608677ba036a
	  Boot ID:                    dff6f14b-491d-4fc3-8cf4-341af2c196ef
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
	  default                     hello-world-app-5d498dc89-kd4wc             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  gadget                      gadget-rhngd                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-j2rkt    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m29s
	  kube-system                 amd-gpu-device-plugin-67vlm                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 coredns-66bc5c9577-9qg6w                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m38s
	  kube-system                 etcd-addons-916037                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m43s
	  kube-system                 kube-apiserver-addons-916037                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-controller-manager-addons-916037       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-proxy-nfbpj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 kube-scheduler-addons-916037                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m36s  kube-proxy       
	  Normal  Starting                 5m43s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m43s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m43s  kubelet          Node addons-916037 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m43s  kubelet          Node addons-916037 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m43s  kubelet          Node addons-916037 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m42s  kubelet          Node addons-916037 status is now: NodeReady
	  Normal  RegisteredNode           5m39s  node-controller  Node addons-916037 event: Registered Node addons-916037 in Controller
	
	
	==> dmesg <==
	[  +7.218704] kauditd_printk_skb: 11 callbacks suppressed
	[Oct 9 18:41] kauditd_printk_skb: 11 callbacks suppressed
	[  +8.143755] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.085067] kauditd_printk_skb: 35 callbacks suppressed
	[  +3.062562] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.000505] kauditd_printk_skb: 5 callbacks suppressed
	[  +2.362102] kauditd_printk_skb: 76 callbacks suppressed
	[  +4.365367] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.379655] kauditd_printk_skb: 96 callbacks suppressed
	[Oct 9 18:42] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000046] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.369635] kauditd_printk_skb: 53 callbacks suppressed
	[Oct 9 18:43] kauditd_printk_skb: 47 callbacks suppressed
	[ +10.520980] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.827391] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.291441] kauditd_printk_skb: 38 callbacks suppressed
	[  +2.328747] kauditd_printk_skb: 141 callbacks suppressed
	[  +1.009871] kauditd_printk_skb: 112 callbacks suppressed
	[  +0.000051] kauditd_printk_skb: 62 callbacks suppressed
	[  +0.643363] kauditd_printk_skb: 91 callbacks suppressed
	[  +4.795481] kauditd_printk_skb: 63 callbacks suppressed
	[Oct 9 18:44] kauditd_printk_skb: 162 callbacks suppressed
	[  +0.521688] kauditd_printk_skb: 10 callbacks suppressed
	[ +19.499853] kauditd_printk_skb: 109 callbacks suppressed
	[Oct 9 18:45] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [7678e71160fe8ae8264529f0fc189eefaec92993af6bdf570033a1a8f44856a9] <==
	{"level":"warn","ts":"2025-10-09T18:41:32.912573Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.367143ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-09T18:41:32.915223Z","caller":"traceutil/trace.go:172","msg":"trace[1053090856] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1012; }","duration":"103.026546ms","start":"2025-10-09T18:41:32.812185Z","end":"2025-10-09T18:41:32.915212Z","steps":["trace[1053090856] 'agreement among raft nodes before linearized reading'  (duration: 99.953914ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-09T18:41:46.853270Z","caller":"traceutil/trace.go:172","msg":"trace[840823790] linearizableReadLoop","detail":"{readStateIndex:1130; appliedIndex:1130; }","duration":"225.739842ms","start":"2025-10-09T18:41:46.627511Z","end":"2025-10-09T18:41:46.853251Z","steps":["trace[840823790] 'read index received'  (duration: 225.734129ms)","trace[840823790] 'applied index is now lower than readState.Index'  (duration: 4.856µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-09T18:41:46.853554Z","caller":"traceutil/trace.go:172","msg":"trace[974347079] transaction","detail":"{read_only:false; response_revision:1096; number_of_response:1; }","duration":"304.45019ms","start":"2025-10-09T18:41:46.549091Z","end":"2025-10-09T18:41:46.853541Z","steps":["trace[974347079] 'process raft request'  (duration: 304.348984ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-09T18:41:46.853833Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"226.334371ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-09T18:41:46.854462Z","caller":"traceutil/trace.go:172","msg":"trace[1787253314] range","detail":"{range_begin:/registry/clusterroles; range_end:; response_count:0; response_revision:1096; }","duration":"226.972727ms","start":"2025-10-09T18:41:46.627476Z","end":"2025-10-09T18:41:46.854449Z","steps":["trace[1787253314] 'agreement among raft nodes before linearized reading'  (duration: 226.289536ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-09T18:41:46.854211Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-09T18:41:46.549076Z","time spent":"305.003296ms","remote":"127.0.0.1:39616","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1087 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-10-09T18:41:46.854758Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"206.085306ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-09T18:41:46.854780Z","caller":"traceutil/trace.go:172","msg":"trace[319541544] range","detail":"{range_begin:/registry/statefulsets; range_end:; response_count:0; response_revision:1096; }","duration":"206.110219ms","start":"2025-10-09T18:41:46.648664Z","end":"2025-10-09T18:41:46.854774Z","steps":["trace[319541544] 'agreement among raft nodes before linearized reading'  (duration: 206.06566ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-09T18:41:46.855085Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"143.400823ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-09T18:41:46.855126Z","caller":"traceutil/trace.go:172","msg":"trace[718459106] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1096; }","duration":"143.442017ms","start":"2025-10-09T18:41:46.711676Z","end":"2025-10-09T18:41:46.855118Z","steps":["trace[718459106] 'agreement among raft nodes before linearized reading'  (duration: 143.387202ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-09T18:42:20.370603Z","caller":"traceutil/trace.go:172","msg":"trace[1403075112] transaction","detail":"{read_only:false; response_revision:1187; number_of_response:1; }","duration":"101.249685ms","start":"2025-10-09T18:42:20.269334Z","end":"2025-10-09T18:42:20.370584Z","steps":["trace[1403075112] 'process raft request'  (duration: 101.127403ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-09T18:42:25.487396Z","caller":"traceutil/trace.go:172","msg":"trace[2077724178] transaction","detail":"{read_only:false; response_revision:1194; number_of_response:1; }","duration":"104.521881ms","start":"2025-10-09T18:42:25.382861Z","end":"2025-10-09T18:42:25.487382Z","steps":["trace[2077724178] 'process raft request'  (duration: 104.413601ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-09T18:43:02.051206Z","caller":"traceutil/trace.go:172","msg":"trace[1197816377] transaction","detail":"{read_only:false; response_revision:1282; number_of_response:1; }","duration":"143.958887ms","start":"2025-10-09T18:43:01.907228Z","end":"2025-10-09T18:43:02.051187Z","steps":["trace[1197816377] 'process raft request'  (duration: 143.837572ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-09T18:43:25.331394Z","caller":"traceutil/trace.go:172","msg":"trace[1316750336] transaction","detail":"{read_only:false; response_revision:1405; number_of_response:1; }","duration":"144.108366ms","start":"2025-10-09T18:43:25.187261Z","end":"2025-10-09T18:43:25.331370Z","steps":["trace[1316750336] 'process raft request'  (duration: 143.990461ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-09T18:43:26.480701Z","caller":"traceutil/trace.go:172","msg":"trace[236722626] linearizableReadLoop","detail":"{readStateIndex:1471; appliedIndex:1471; }","duration":"294.16662ms","start":"2025-10-09T18:43:26.186518Z","end":"2025-10-09T18:43:26.480684Z","steps":["trace[236722626] 'read index received'  (duration: 294.160233ms)","trace[236722626] 'applied index is now lower than readState.Index'  (duration: 4.74µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-09T18:43:26.484143Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"297.602084ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-09T18:43:26.484218Z","caller":"traceutil/trace.go:172","msg":"trace[1177904678] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1413; }","duration":"297.719008ms","start":"2025-10-09T18:43:26.186491Z","end":"2025-10-09T18:43:26.484210Z","steps":["trace[1177904678] 'agreement among raft nodes before linearized reading'  (duration: 294.273275ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-09T18:43:26.484532Z","caller":"traceutil/trace.go:172","msg":"trace[1738959765] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1414; }","duration":"334.14977ms","start":"2025-10-09T18:43:26.150374Z","end":"2025-10-09T18:43:26.484524Z","steps":["trace[1738959765] 'process raft request'  (duration: 330.489798ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-09T18:43:26.484797Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-09T18:43:26.150351Z","time spent":"334.203695ms","remote":"127.0.0.1:40162","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":52,"response count":0,"response size":41,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/metrics-server\" mod_revision:997 > success:<request_delete_range:<key:\"/registry/deployments/kube-system/metrics-server\" > > failure:<request_range:<key:\"/registry/deployments/kube-system/metrics-server\" > >"}
	{"level":"info","ts":"2025-10-09T18:43:26.486383Z","caller":"traceutil/trace.go:172","msg":"trace[244923895] transaction","detail":"{read_only:false; response_revision:1415; number_of_response:1; }","duration":"313.197321ms","start":"2025-10-09T18:43:26.173176Z","end":"2025-10-09T18:43:26.486373Z","steps":["trace[244923895] 'process raft request'  (duration: 311.197351ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-09T18:43:26.486539Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-09T18:43:26.173156Z","time spent":"313.353037ms","remote":"127.0.0.1:39582","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":454,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/persistentvolumeclaims/default/hpvc\" mod_revision:0 > success:<request_put:<key:\"/registry/persistentvolumeclaims/default/hpvc\" value_size:401 >> failure:<>"}
	{"level":"info","ts":"2025-10-09T18:43:31.747624Z","caller":"traceutil/trace.go:172","msg":"trace[1202669372] transaction","detail":"{read_only:false; response_revision:1489; number_of_response:1; }","duration":"143.928897ms","start":"2025-10-09T18:43:31.603681Z","end":"2025-10-09T18:43:31.747610Z","steps":["trace[1202669372] 'process raft request'  (duration: 143.894004ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-09T18:43:31.750262Z","caller":"traceutil/trace.go:172","msg":"trace[1406085649] transaction","detail":"{read_only:false; response_revision:1488; number_of_response:1; }","duration":"149.733759ms","start":"2025-10-09T18:43:31.598827Z","end":"2025-10-09T18:43:31.748561Z","steps":["trace[1406085649] 'process raft request'  (duration: 53.864803ms)","trace[1406085649] 'compare'  (duration: 94.768451ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-09T18:43:41.202014Z","caller":"traceutil/trace.go:172","msg":"trace[613660048] transaction","detail":"{read_only:false; response_revision:1566; number_of_response:1; }","duration":"222.207587ms","start":"2025-10-09T18:43:40.979794Z","end":"2025-10-09T18:43:41.202001Z","steps":["trace[613660048] 'process raft request'  (duration: 222.079078ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:45:56 up 6 min,  0 users,  load average: 0.94, 1.84, 1.00
	Linux addons-916037 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [2eb563bed9637bc802e51071435813dece8011eebc96a4dc1f877756e49a3b4f] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1009 18:41:30.555791       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1009 18:43:10.462236       1 conn.go:339] Error on socket receive: read tcp 192.168.39.158:8443->192.168.39.1:55420: use of closed network connection
	E1009 18:43:10.655538       1 conn.go:339] Error on socket receive: read tcp 192.168.39.158:8443->192.168.39.1:55454: use of closed network connection
	I1009 18:43:19.906526       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.223.122"}
	I1009 18:43:27.223541       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1009 18:43:27.445990       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.129.168"}
	I1009 18:43:31.572626       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1009 18:43:49.774654       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1009 18:44:08.532742       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:44:08.532888       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 18:44:08.568581       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:44:08.568639       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 18:44:08.591756       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:44:08.591814       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 18:44:08.630227       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:44:08.630332       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 18:44:08.663873       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:44:08.663926       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1009 18:44:09.631130       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1009 18:44:09.667126       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1009 18:44:09.795043       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E1009 18:44:14.868538       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1009 18:45:54.994920       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.176.59"}
	
	
	==> kube-controller-manager [5a4ef68e06ff2e70491f5c42d53b36ac90ecf51c91dddc407ede20a0191d23a8] <==
	I1009 18:44:17.401785       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1009 18:44:18.558158       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1009 18:44:18.559714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1009 18:44:19.370447       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1009 18:44:19.371565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1009 18:44:20.125684       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1009 18:44:20.127278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1009 18:44:25.138142       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1009 18:44:25.140333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1009 18:44:27.074920       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1009 18:44:27.076444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1009 18:44:30.885224       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1009 18:44:30.886468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1009 18:44:44.360895       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1009 18:44:44.362026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1009 18:44:46.161418       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1009 18:44:46.162688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1009 18:44:46.392929       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1009 18:44:46.394700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1009 18:45:16.952049       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1009 18:45:16.953370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1009 18:45:30.723439       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1009 18:45:30.724648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1009 18:45:34.820663       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1009 18:45:34.822145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [8a51d36cb84fae54c73c9a3c77f36e88c49d47f2b59fb81b94a28326dc982983] <==
	I1009 18:40:19.629431       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 18:40:19.730263       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 18:40:19.730797       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.158"]
	E1009 18:40:19.731064       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 18:40:20.045300       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1009 18:40:20.045480       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 18:40:20.045514       1 server_linux.go:132] "Using iptables Proxier"
	I1009 18:40:20.083797       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 18:40:20.090424       1 server.go:527] "Version info" version="v1.34.1"
	I1009 18:40:20.091247       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 18:40:20.129819       1 config.go:309] "Starting node config controller"
	I1009 18:40:20.129851       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 18:40:20.129859       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 18:40:20.131670       1 config.go:200] "Starting service config controller"
	I1009 18:40:20.131706       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 18:40:20.131731       1 config.go:106] "Starting endpoint slice config controller"
	I1009 18:40:20.131735       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 18:40:20.131776       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 18:40:20.131780       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 18:40:20.233105       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 18:40:20.233143       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 18:40:20.233173       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a20d7640d4c04da648fb04ca5a8b60deb8aa5e7676aefb4aacf011ad44567482] <==
	E1009 18:40:10.246651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1009 18:40:10.250921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1009 18:40:10.251050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1009 18:40:10.252048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1009 18:40:10.252246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1009 18:40:10.252321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1009 18:40:10.259088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1009 18:40:11.042277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1009 18:40:11.073132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1009 18:40:11.100174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1009 18:40:11.100235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1009 18:40:11.135559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1009 18:40:11.170420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1009 18:40:11.270441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1009 18:40:11.308986       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1009 18:40:11.353331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1009 18:40:11.371782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1009 18:40:11.426568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1009 18:40:11.472041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1009 18:40:11.513194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1009 18:40:11.525648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1009 18:40:11.554133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1009 18:40:11.625517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1009 18:40:11.637439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1009 18:40:13.515253       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 18:44:31 addons-916037 kubelet[1513]: I1009 18:44:31.104224    1513 scope.go:117] "RemoveContainer" containerID="3977d45ce59361712220a3a5b9844517e3b45414c28c5104d553b1ca84e315ee"
	Oct 09 18:44:31 addons-916037 kubelet[1513]: E1009 18:44:31.105125    1513 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3977d45ce59361712220a3a5b9844517e3b45414c28c5104d553b1ca84e315ee\": container with ID starting with 3977d45ce59361712220a3a5b9844517e3b45414c28c5104d553b1ca84e315ee not found: ID does not exist" containerID="3977d45ce59361712220a3a5b9844517e3b45414c28c5104d553b1ca84e315ee"
	Oct 09 18:44:31 addons-916037 kubelet[1513]: I1009 18:44:31.105180    1513 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3977d45ce59361712220a3a5b9844517e3b45414c28c5104d553b1ca84e315ee"} err="failed to get container status \"3977d45ce59361712220a3a5b9844517e3b45414c28c5104d553b1ca84e315ee\": rpc error: code = NotFound desc = could not find container \"3977d45ce59361712220a3a5b9844517e3b45414c28c5104d553b1ca84e315ee\": container with ID starting with 3977d45ce59361712220a3a5b9844517e3b45414c28c5104d553b1ca84e315ee not found: ID does not exist"
	Oct 09 18:44:31 addons-916037 kubelet[1513]: I1009 18:44:31.194815    1513 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc46b02a-7cc3-41e3-ab22-2f470321c55d" path="/var/lib/kubelet/pods/bc46b02a-7cc3-41e3-ab22-2f470321c55d/volumes"
	Oct 09 18:44:33 addons-916037 kubelet[1513]: E1009 18:44:33.674535    1513 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760035473674172372  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:44:33 addons-916037 kubelet[1513]: E1009 18:44:33.674557    1513 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760035473674172372  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:44:43 addons-916037 kubelet[1513]: E1009 18:44:43.678847    1513 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760035483678447114  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:44:43 addons-916037 kubelet[1513]: E1009 18:44:43.678936    1513 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760035483678447114  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:44:53 addons-916037 kubelet[1513]: E1009 18:44:53.681996    1513 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760035493681518736  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:44:53 addons-916037 kubelet[1513]: E1009 18:44:53.682045    1513 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760035493681518736  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:45:03 addons-916037 kubelet[1513]: E1009 18:45:03.685004    1513 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760035503684560722  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:45:03 addons-916037 kubelet[1513]: E1009 18:45:03.685034    1513 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760035503684560722  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:45:13 addons-916037 kubelet[1513]: E1009 18:45:13.688552    1513 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760035513687805534  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:45:13 addons-916037 kubelet[1513]: E1009 18:45:13.688588    1513 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760035513687805534  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:45:23 addons-916037 kubelet[1513]: E1009 18:45:23.692842    1513 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760035523691892678  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:45:23 addons-916037 kubelet[1513]: E1009 18:45:23.692869    1513 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760035523691892678  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:45:28 addons-916037 kubelet[1513]: I1009 18:45:28.193588    1513 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-67vlm" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 18:45:33 addons-916037 kubelet[1513]: E1009 18:45:33.696063    1513 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760035533695606717  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:45:33 addons-916037 kubelet[1513]: E1009 18:45:33.696088    1513 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760035533695606717  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:45:43 addons-916037 kubelet[1513]: E1009 18:45:43.699519    1513 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760035543698907477  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:45:43 addons-916037 kubelet[1513]: E1009 18:45:43.699615    1513 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760035543698907477  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:45:48 addons-916037 kubelet[1513]: I1009 18:45:48.190355    1513 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 18:45:53 addons-916037 kubelet[1513]: E1009 18:45:53.703180    1513 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760035553702794725  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:45:53 addons-916037 kubelet[1513]: E1009 18:45:53.703245    1513 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760035553702794725  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598010}  inodes_used:{value:201}}"
	Oct 09 18:45:55 addons-916037 kubelet[1513]: I1009 18:45:55.073582    1513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9f4t\" (UniqueName: \"kubernetes.io/projected/d5bd21e8-d7a6-4fc2-b10b-f029186582ee-kube-api-access-w9f4t\") pod \"hello-world-app-5d498dc89-kd4wc\" (UID: \"d5bd21e8-d7a6-4fc2-b10b-f029186582ee\") " pod="default/hello-world-app-5d498dc89-kd4wc"
	
	
	==> storage-provisioner [37f21605afb4018635092483bd4710ea61635abe5d2f0653f60fd346a16579a7] <==
	W1009 18:45:32.626845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:45:34.630705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:45:34.636301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:45:36.639884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:45:36.645585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:45:38.649586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:45:38.658695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:45:40.662518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:45:40.667785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:45:42.673338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:45:42.679894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:45:44.683669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:45:44.688907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:45:46.693171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:45:46.701383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:45:48.708413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:45:48.716145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:45:50.719540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:45:50.724565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:45:52.728380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:45:52.735535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:45:54.739274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:45:54.745275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:45:56.748830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1009 18:45:56.756196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-916037 -n addons-916037
helpers_test.go:269: (dbg) Run:  kubectl --context addons-916037 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-kd4wc ingress-nginx-admission-create-lp67s ingress-nginx-admission-patch-rfmnn
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-916037 describe pod hello-world-app-5d498dc89-kd4wc ingress-nginx-admission-create-lp67s ingress-nginx-admission-patch-rfmnn
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-916037 describe pod hello-world-app-5d498dc89-kd4wc ingress-nginx-admission-create-lp67s ingress-nginx-admission-patch-rfmnn: exit status 1 (97.226234ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-kd4wc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-916037/192.168.39.158
	Start Time:       Thu, 09 Oct 2025 18:45:54 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w9f4t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-w9f4t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-kd4wc to addons-916037
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-lp67s" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-rfmnn" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-916037 describe pod hello-world-app-5d498dc89-kd4wc ingress-nginx-admission-create-lp67s ingress-nginx-admission-patch-rfmnn: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-916037 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-916037 addons disable ingress-dns --alsologtostderr -v=1: (1.859725395s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-916037 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-916037 addons disable ingress --alsologtostderr -v=1: (7.836429012s)
--- FAIL: TestAddons/parallel/Ingress (160.39s)

                                                
                                    
TestPreload (164.87s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-146992 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
E1009 19:32:59.344821  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-146992 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m31.924502712s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-146992 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-146992 image pull gcr.io/k8s-minikube/busybox: (3.520918927s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-146992
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-146992: (7.77255011s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-146992 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-146992 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (58.642996983s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-146992 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-10-09 19:35:18.106139669 +0000 UTC m=+3386.977440131
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-146992 -n test-preload-146992
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-146992 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-146992 logs -n 25: (1.145048222s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-396378 ssh -n multinode-396378-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-396378     │ jenkins │ v1.37.0 │ 09 Oct 25 19:20 UTC │ 09 Oct 25 19:21 UTC │
	│ ssh     │ multinode-396378 ssh -n multinode-396378 sudo cat /home/docker/cp-test_multinode-396378-m03_multinode-396378.txt                                                                    │ multinode-396378     │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │ 09 Oct 25 19:21 UTC │
	│ cp      │ multinode-396378 cp multinode-396378-m03:/home/docker/cp-test.txt multinode-396378-m02:/home/docker/cp-test_multinode-396378-m03_multinode-396378-m02.txt                           │ multinode-396378     │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │ 09 Oct 25 19:21 UTC │
	│ ssh     │ multinode-396378 ssh -n multinode-396378-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-396378     │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │ 09 Oct 25 19:21 UTC │
	│ ssh     │ multinode-396378 ssh -n multinode-396378-m02 sudo cat /home/docker/cp-test_multinode-396378-m03_multinode-396378-m02.txt                                                            │ multinode-396378     │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │ 09 Oct 25 19:21 UTC │
	│ node    │ multinode-396378 node stop m03                                                                                                                                                      │ multinode-396378     │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │ 09 Oct 25 19:21 UTC │
	│ node    │ multinode-396378 node start m03 -v=5 --alsologtostderr                                                                                                                              │ multinode-396378     │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │ 09 Oct 25 19:21 UTC │
	│ node    │ list -p multinode-396378                                                                                                                                                            │ multinode-396378     │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │                     │
	│ stop    │ -p multinode-396378                                                                                                                                                                 │ multinode-396378     │ jenkins │ v1.37.0 │ 09 Oct 25 19:21 UTC │ 09 Oct 25 19:24 UTC │
	│ start   │ -p multinode-396378 --wait=true -v=5 --alsologtostderr                                                                                                                              │ multinode-396378     │ jenkins │ v1.37.0 │ 09 Oct 25 19:24 UTC │ 09 Oct 25 19:26 UTC │
	│ node    │ list -p multinode-396378                                                                                                                                                            │ multinode-396378     │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │                     │
	│ node    │ multinode-396378 node delete m03                                                                                                                                                    │ multinode-396378     │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:26 UTC │
	│ stop    │ multinode-396378 stop                                                                                                                                                               │ multinode-396378     │ jenkins │ v1.37.0 │ 09 Oct 25 19:26 UTC │ 09 Oct 25 19:29 UTC │
	│ start   │ -p multinode-396378 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                          │ multinode-396378     │ jenkins │ v1.37.0 │ 09 Oct 25 19:29 UTC │ 09 Oct 25 19:31 UTC │
	│ node    │ list -p multinode-396378                                                                                                                                                            │ multinode-396378     │ jenkins │ v1.37.0 │ 09 Oct 25 19:31 UTC │                     │
	│ start   │ -p multinode-396378-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-396378-m02 │ jenkins │ v1.37.0 │ 09 Oct 25 19:31 UTC │                     │
	│ start   │ -p multinode-396378-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-396378-m03 │ jenkins │ v1.37.0 │ 09 Oct 25 19:31 UTC │ 09 Oct 25 19:32 UTC │
	│ node    │ add -p multinode-396378                                                                                                                                                             │ multinode-396378     │ jenkins │ v1.37.0 │ 09 Oct 25 19:32 UTC │                     │
	│ delete  │ -p multinode-396378-m03                                                                                                                                                             │ multinode-396378-m03 │ jenkins │ v1.37.0 │ 09 Oct 25 19:32 UTC │ 09 Oct 25 19:32 UTC │
	│ delete  │ -p multinode-396378                                                                                                                                                                 │ multinode-396378     │ jenkins │ v1.37.0 │ 09 Oct 25 19:32 UTC │ 09 Oct 25 19:32 UTC │
	│ start   │ -p test-preload-146992 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0 │ test-preload-146992  │ jenkins │ v1.37.0 │ 09 Oct 25 19:32 UTC │ 09 Oct 25 19:34 UTC │
	│ image   │ test-preload-146992 image pull gcr.io/k8s-minikube/busybox                                                                                                                          │ test-preload-146992  │ jenkins │ v1.37.0 │ 09 Oct 25 19:34 UTC │ 09 Oct 25 19:34 UTC │
	│ stop    │ -p test-preload-146992                                                                                                                                                              │ test-preload-146992  │ jenkins │ v1.37.0 │ 09 Oct 25 19:34 UTC │ 09 Oct 25 19:34 UTC │
	│ start   │ -p test-preload-146992 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                         │ test-preload-146992  │ jenkins │ v1.37.0 │ 09 Oct 25 19:34 UTC │ 09 Oct 25 19:35 UTC │
	│ image   │ test-preload-146992 image list                                                                                                                                                      │ test-preload-146992  │ jenkins │ v1.37.0 │ 09 Oct 25 19:35 UTC │ 09 Oct 25 19:35 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:34:19
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:34:19.279273  171452 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:34:19.279551  171452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:34:19.279573  171452 out.go:374] Setting ErrFile to fd 2...
	I1009 19:34:19.279577  171452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:34:19.279798  171452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-136449/.minikube/bin
	I1009 19:34:19.280293  171452 out.go:368] Setting JSON to false
	I1009 19:34:19.281158  171452 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8199,"bootTime":1760030260,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:34:19.281275  171452 start.go:143] virtualization: kvm guest
	I1009 19:34:19.283603  171452 out.go:179] * [test-preload-146992] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:34:19.284890  171452 notify.go:221] Checking for updates...
	I1009 19:34:19.284933  171452 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:34:19.286244  171452 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:34:19.287566  171452 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-136449/kubeconfig
	I1009 19:34:19.289016  171452 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-136449/.minikube
	I1009 19:34:19.290261  171452 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:34:19.291423  171452 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:34:19.293204  171452 config.go:182] Loaded profile config "test-preload-146992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1009 19:34:19.293656  171452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:34:19.293709  171452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:34:19.307051  171452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43735
	I1009 19:34:19.307632  171452 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:34:19.308247  171452 main.go:141] libmachine: Using API Version  1
	I1009 19:34:19.308270  171452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:34:19.308624  171452 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:34:19.308815  171452 main.go:141] libmachine: (test-preload-146992) Calling .DriverName
	I1009 19:34:19.310640  171452 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1009 19:34:19.311980  171452 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:34:19.312318  171452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:34:19.312360  171452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:34:19.325185  171452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36237
	I1009 19:34:19.325664  171452 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:34:19.326080  171452 main.go:141] libmachine: Using API Version  1
	I1009 19:34:19.326106  171452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:34:19.326409  171452 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:34:19.326589  171452 main.go:141] libmachine: (test-preload-146992) Calling .DriverName
	I1009 19:34:19.359221  171452 out.go:179] * Using the kvm2 driver based on existing profile
	I1009 19:34:19.360243  171452 start.go:309] selected driver: kvm2
	I1009 19:34:19.360256  171452 start.go:930] validating driver "kvm2" against &{Name:test-preload-146992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-146992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:34:19.360344  171452 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:34:19.361011  171452 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:34:19.361074  171452 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-136449/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 19:34:19.373475  171452 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 19:34:19.373496  171452 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-136449/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 19:34:19.385605  171452 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 19:34:19.385929  171452 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:34:19.385954  171452 cni.go:84] Creating CNI manager for ""
	I1009 19:34:19.385996  171452 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 19:34:19.386037  171452 start.go:353] cluster config:
	{Name:test-preload-146992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-146992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:34:19.386129  171452 iso.go:125] acquiring lock: {Name:mk98a4af23a55ce5e8a323d2964def6dd3fc61ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:34:19.387762  171452 out.go:179] * Starting "test-preload-146992" primary control-plane node in "test-preload-146992" cluster
	I1009 19:34:19.388912  171452 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1009 19:34:19.862617  171452 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1009 19:34:19.862648  171452 cache.go:58] Caching tarball of preloaded images
	I1009 19:34:19.862803  171452 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1009 19:34:19.864634  171452 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1009 19:34:19.865692  171452 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1009 19:34:20.418605  171452 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1009 19:34:20.418653  171452 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21683-136449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1009 19:34:32.632044  171452 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1009 19:34:32.632215  171452 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/test-preload-146992/config.json ...
	I1009 19:34:32.632477  171452 start.go:361] acquireMachinesLock for test-preload-146992: {Name:mkb52a311831bedb463a7965f6666d89b7fa391a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 19:34:32.632579  171452 start.go:365] duration metric: took 55.12µs to acquireMachinesLock for "test-preload-146992"
	I1009 19:34:32.632604  171452 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:34:32.632613  171452 fix.go:55] fixHost starting: 
	I1009 19:34:32.632903  171452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:34:32.632959  171452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:34:32.646128  171452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37305
	I1009 19:34:32.646633  171452 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:34:32.647022  171452 main.go:141] libmachine: Using API Version  1
	I1009 19:34:32.647047  171452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:34:32.647409  171452 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:34:32.647639  171452 main.go:141] libmachine: (test-preload-146992) Calling .DriverName
	I1009 19:34:32.647796  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetState
	I1009 19:34:32.649498  171452 fix.go:113] recreateIfNeeded on test-preload-146992: state=Stopped err=<nil>
	I1009 19:34:32.649525  171452 main.go:141] libmachine: (test-preload-146992) Calling .DriverName
	W1009 19:34:32.649676  171452 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:34:32.651725  171452 out.go:252] * Restarting existing kvm2 VM for "test-preload-146992" ...
	I1009 19:34:32.651757  171452 main.go:141] libmachine: (test-preload-146992) Calling .Start
	I1009 19:34:32.651923  171452 main.go:141] libmachine: (test-preload-146992) starting domain...
	I1009 19:34:32.651949  171452 main.go:141] libmachine: (test-preload-146992) ensuring networks are active...
	I1009 19:34:32.652671  171452 main.go:141] libmachine: (test-preload-146992) Ensuring network default is active
	I1009 19:34:32.653106  171452 main.go:141] libmachine: (test-preload-146992) Ensuring network mk-test-preload-146992 is active
	I1009 19:34:32.653530  171452 main.go:141] libmachine: (test-preload-146992) getting domain XML...
	I1009 19:34:32.654664  171452 main.go:141] libmachine: (test-preload-146992) DBG | starting domain XML:
	I1009 19:34:32.654680  171452 main.go:141] libmachine: (test-preload-146992) DBG | <domain type='kvm'>
	I1009 19:34:32.654690  171452 main.go:141] libmachine: (test-preload-146992) DBG |   <name>test-preload-146992</name>
	I1009 19:34:32.654700  171452 main.go:141] libmachine: (test-preload-146992) DBG |   <uuid>19d45dea-cf9e-467b-934c-d392ad5a109f</uuid>
	I1009 19:34:32.654712  171452 main.go:141] libmachine: (test-preload-146992) DBG |   <memory unit='KiB'>3145728</memory>
	I1009 19:34:32.654719  171452 main.go:141] libmachine: (test-preload-146992) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1009 19:34:32.654730  171452 main.go:141] libmachine: (test-preload-146992) DBG |   <vcpu placement='static'>2</vcpu>
	I1009 19:34:32.654741  171452 main.go:141] libmachine: (test-preload-146992) DBG |   <os>
	I1009 19:34:32.654755  171452 main.go:141] libmachine: (test-preload-146992) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1009 19:34:32.654772  171452 main.go:141] libmachine: (test-preload-146992) DBG |     <boot dev='cdrom'/>
	I1009 19:34:32.654810  171452 main.go:141] libmachine: (test-preload-146992) DBG |     <boot dev='hd'/>
	I1009 19:34:32.654832  171452 main.go:141] libmachine: (test-preload-146992) DBG |     <bootmenu enable='no'/>
	I1009 19:34:32.654845  171452 main.go:141] libmachine: (test-preload-146992) DBG |   </os>
	I1009 19:34:32.654853  171452 main.go:141] libmachine: (test-preload-146992) DBG |   <features>
	I1009 19:34:32.654859  171452 main.go:141] libmachine: (test-preload-146992) DBG |     <acpi/>
	I1009 19:34:32.654865  171452 main.go:141] libmachine: (test-preload-146992) DBG |     <apic/>
	I1009 19:34:32.654873  171452 main.go:141] libmachine: (test-preload-146992) DBG |     <pae/>
	I1009 19:34:32.654879  171452 main.go:141] libmachine: (test-preload-146992) DBG |   </features>
	I1009 19:34:32.654891  171452 main.go:141] libmachine: (test-preload-146992) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1009 19:34:32.654899  171452 main.go:141] libmachine: (test-preload-146992) DBG |   <clock offset='utc'/>
	I1009 19:34:32.654977  171452 main.go:141] libmachine: (test-preload-146992) DBG |   <on_poweroff>destroy</on_poweroff>
	I1009 19:34:32.655011  171452 main.go:141] libmachine: (test-preload-146992) DBG |   <on_reboot>restart</on_reboot>
	I1009 19:34:32.655024  171452 main.go:141] libmachine: (test-preload-146992) DBG |   <on_crash>destroy</on_crash>
	I1009 19:34:32.655036  171452 main.go:141] libmachine: (test-preload-146992) DBG |   <devices>
	I1009 19:34:32.655047  171452 main.go:141] libmachine: (test-preload-146992) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1009 19:34:32.655064  171452 main.go:141] libmachine: (test-preload-146992) DBG |     <disk type='file' device='cdrom'>
	I1009 19:34:32.655074  171452 main.go:141] libmachine: (test-preload-146992) DBG |       <driver name='qemu' type='raw'/>
	I1009 19:34:32.655088  171452 main.go:141] libmachine: (test-preload-146992) DBG |       <source file='/home/jenkins/minikube-integration/21683-136449/.minikube/machines/test-preload-146992/boot2docker.iso'/>
	I1009 19:34:32.655103  171452 main.go:141] libmachine: (test-preload-146992) DBG |       <target dev='hdc' bus='scsi'/>
	I1009 19:34:32.655113  171452 main.go:141] libmachine: (test-preload-146992) DBG |       <readonly/>
	I1009 19:34:32.655125  171452 main.go:141] libmachine: (test-preload-146992) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1009 19:34:32.655134  171452 main.go:141] libmachine: (test-preload-146992) DBG |     </disk>
	I1009 19:34:32.655144  171452 main.go:141] libmachine: (test-preload-146992) DBG |     <disk type='file' device='disk'>
	I1009 19:34:32.655158  171452 main.go:141] libmachine: (test-preload-146992) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1009 19:34:32.655172  171452 main.go:141] libmachine: (test-preload-146992) DBG |       <source file='/home/jenkins/minikube-integration/21683-136449/.minikube/machines/test-preload-146992/test-preload-146992.rawdisk'/>
	I1009 19:34:32.655189  171452 main.go:141] libmachine: (test-preload-146992) DBG |       <target dev='hda' bus='virtio'/>
	I1009 19:34:32.655205  171452 main.go:141] libmachine: (test-preload-146992) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1009 19:34:32.655212  171452 main.go:141] libmachine: (test-preload-146992) DBG |     </disk>
	I1009 19:34:32.655224  171452 main.go:141] libmachine: (test-preload-146992) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1009 19:34:32.655245  171452 main.go:141] libmachine: (test-preload-146992) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1009 19:34:32.655254  171452 main.go:141] libmachine: (test-preload-146992) DBG |     </controller>
	I1009 19:34:32.655262  171452 main.go:141] libmachine: (test-preload-146992) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1009 19:34:32.655276  171452 main.go:141] libmachine: (test-preload-146992) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1009 19:34:32.655289  171452 main.go:141] libmachine: (test-preload-146992) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1009 19:34:32.655300  171452 main.go:141] libmachine: (test-preload-146992) DBG |     </controller>
	I1009 19:34:32.655315  171452 main.go:141] libmachine: (test-preload-146992) DBG |     <interface type='network'>
	I1009 19:34:32.655327  171452 main.go:141] libmachine: (test-preload-146992) DBG |       <mac address='52:54:00:12:84:6f'/>
	I1009 19:34:32.655336  171452 main.go:141] libmachine: (test-preload-146992) DBG |       <source network='mk-test-preload-146992'/>
	I1009 19:34:32.655342  171452 main.go:141] libmachine: (test-preload-146992) DBG |       <model type='virtio'/>
	I1009 19:34:32.655355  171452 main.go:141] libmachine: (test-preload-146992) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1009 19:34:32.655368  171452 main.go:141] libmachine: (test-preload-146992) DBG |     </interface>
	I1009 19:34:32.655379  171452 main.go:141] libmachine: (test-preload-146992) DBG |     <interface type='network'>
	I1009 19:34:32.655401  171452 main.go:141] libmachine: (test-preload-146992) DBG |       <mac address='52:54:00:49:64:ea'/>
	I1009 19:34:32.655419  171452 main.go:141] libmachine: (test-preload-146992) DBG |       <source network='default'/>
	I1009 19:34:32.655432  171452 main.go:141] libmachine: (test-preload-146992) DBG |       <model type='virtio'/>
	I1009 19:34:32.655445  171452 main.go:141] libmachine: (test-preload-146992) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1009 19:34:32.655465  171452 main.go:141] libmachine: (test-preload-146992) DBG |     </interface>
	I1009 19:34:32.655476  171452 main.go:141] libmachine: (test-preload-146992) DBG |     <serial type='pty'>
	I1009 19:34:32.655486  171452 main.go:141] libmachine: (test-preload-146992) DBG |       <target type='isa-serial' port='0'>
	I1009 19:34:32.655498  171452 main.go:141] libmachine: (test-preload-146992) DBG |         <model name='isa-serial'/>
	I1009 19:34:32.655509  171452 main.go:141] libmachine: (test-preload-146992) DBG |       </target>
	I1009 19:34:32.655516  171452 main.go:141] libmachine: (test-preload-146992) DBG |     </serial>
	I1009 19:34:32.655530  171452 main.go:141] libmachine: (test-preload-146992) DBG |     <console type='pty'>
	I1009 19:34:32.655540  171452 main.go:141] libmachine: (test-preload-146992) DBG |       <target type='serial' port='0'/>
	I1009 19:34:32.655545  171452 main.go:141] libmachine: (test-preload-146992) DBG |     </console>
	I1009 19:34:32.655567  171452 main.go:141] libmachine: (test-preload-146992) DBG |     <input type='mouse' bus='ps2'/>
	I1009 19:34:32.655593  171452 main.go:141] libmachine: (test-preload-146992) DBG |     <input type='keyboard' bus='ps2'/>
	I1009 19:34:32.655616  171452 main.go:141] libmachine: (test-preload-146992) DBG |     <audio id='1' type='none'/>
	I1009 19:34:32.655628  171452 main.go:141] libmachine: (test-preload-146992) DBG |     <memballoon model='virtio'>
	I1009 19:34:32.655640  171452 main.go:141] libmachine: (test-preload-146992) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1009 19:34:32.655648  171452 main.go:141] libmachine: (test-preload-146992) DBG |     </memballoon>
	I1009 19:34:32.655658  171452 main.go:141] libmachine: (test-preload-146992) DBG |     <rng model='virtio'>
	I1009 19:34:32.655681  171452 main.go:141] libmachine: (test-preload-146992) DBG |       <backend model='random'>/dev/random</backend>
	I1009 19:34:32.655698  171452 main.go:141] libmachine: (test-preload-146992) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1009 19:34:32.655710  171452 main.go:141] libmachine: (test-preload-146992) DBG |     </rng>
	I1009 19:34:32.655721  171452 main.go:141] libmachine: (test-preload-146992) DBG |   </devices>
	I1009 19:34:32.655728  171452 main.go:141] libmachine: (test-preload-146992) DBG | </domain>
	I1009 19:34:32.655737  171452 main.go:141] libmachine: (test-preload-146992) DBG | 
	I1009 19:34:33.898274  171452 main.go:141] libmachine: (test-preload-146992) waiting for domain to start...
	I1009 19:34:33.899620  171452 main.go:141] libmachine: (test-preload-146992) domain is now running
	I1009 19:34:33.899650  171452 main.go:141] libmachine: (test-preload-146992) waiting for IP...
	I1009 19:34:33.900364  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:33.900910  171452 main.go:141] libmachine: (test-preload-146992) found domain IP: 192.168.39.217
	I1009 19:34:33.900932  171452 main.go:141] libmachine: (test-preload-146992) reserving static IP address...
	I1009 19:34:33.900947  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has current primary IP address 192.168.39.217 and MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:33.901380  171452 main.go:141] libmachine: (test-preload-146992) DBG | found host DHCP lease matching {name: "test-preload-146992", mac: "52:54:00:12:84:6f", ip: "192.168.39.217"} in network mk-test-preload-146992: {Iface:virbr1 ExpiryTime:2025-10-09 20:32:52 +0000 UTC Type:0 Mac:52:54:00:12:84:6f Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-146992 Clientid:01:52:54:00:12:84:6f}
	I1009 19:34:33.901407  171452 main.go:141] libmachine: (test-preload-146992) reserved static IP address 192.168.39.217 for domain test-preload-146992
	I1009 19:34:33.901424  171452 main.go:141] libmachine: (test-preload-146992) DBG | skip adding static IP to network mk-test-preload-146992 - found existing host DHCP lease matching {name: "test-preload-146992", mac: "52:54:00:12:84:6f", ip: "192.168.39.217"}
	I1009 19:34:33.901439  171452 main.go:141] libmachine: (test-preload-146992) DBG | Getting to WaitForSSH function...
	I1009 19:34:33.901463  171452 main.go:141] libmachine: (test-preload-146992) waiting for SSH...
	I1009 19:34:33.903608  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:33.903940  171452 main.go:141] libmachine: (test-preload-146992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:84:6f", ip: ""} in network mk-test-preload-146992: {Iface:virbr1 ExpiryTime:2025-10-09 20:32:52 +0000 UTC Type:0 Mac:52:54:00:12:84:6f Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-146992 Clientid:01:52:54:00:12:84:6f}
	I1009 19:34:33.903961  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined IP address 192.168.39.217 and MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:33.904124  171452 main.go:141] libmachine: (test-preload-146992) DBG | Using SSH client type: external
	I1009 19:34:33.904154  171452 main.go:141] libmachine: (test-preload-146992) DBG | Using SSH private key: /home/jenkins/minikube-integration/21683-136449/.minikube/machines/test-preload-146992/id_rsa (-rw-------)
	I1009 19:34:33.904195  171452 main.go:141] libmachine: (test-preload-146992) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21683-136449/.minikube/machines/test-preload-146992/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 19:34:33.904221  171452 main.go:141] libmachine: (test-preload-146992) DBG | About to run SSH command:
	I1009 19:34:33.904237  171452 main.go:141] libmachine: (test-preload-146992) DBG | exit 0
	I1009 19:34:45.192997  171452 main.go:141] libmachine: (test-preload-146992) DBG | SSH cmd err, output: exit status 255: 
	I1009 19:34:45.193027  171452 main.go:141] libmachine: (test-preload-146992) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1009 19:34:45.193037  171452 main.go:141] libmachine: (test-preload-146992) DBG | command : exit 0
	I1009 19:34:45.193044  171452 main.go:141] libmachine: (test-preload-146992) DBG | err     : exit status 255
	I1009 19:34:45.193055  171452 main.go:141] libmachine: (test-preload-146992) DBG | output  : 
	I1009 19:34:48.193551  171452 main.go:141] libmachine: (test-preload-146992) DBG | Getting to WaitForSSH function...
	I1009 19:34:48.196695  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:48.197116  171452 main.go:141] libmachine: (test-preload-146992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:84:6f", ip: ""} in network mk-test-preload-146992: {Iface:virbr1 ExpiryTime:2025-10-09 20:34:44 +0000 UTC Type:0 Mac:52:54:00:12:84:6f Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-146992 Clientid:01:52:54:00:12:84:6f}
	I1009 19:34:48.197140  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined IP address 192.168.39.217 and MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:48.197319  171452 main.go:141] libmachine: (test-preload-146992) DBG | Using SSH client type: external
	I1009 19:34:48.197349  171452 main.go:141] libmachine: (test-preload-146992) DBG | Using SSH private key: /home/jenkins/minikube-integration/21683-136449/.minikube/machines/test-preload-146992/id_rsa (-rw-------)
	I1009 19:34:48.197371  171452 main.go:141] libmachine: (test-preload-146992) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21683-136449/.minikube/machines/test-preload-146992/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 19:34:48.197380  171452 main.go:141] libmachine: (test-preload-146992) DBG | About to run SSH command:
	I1009 19:34:48.197397  171452 main.go:141] libmachine: (test-preload-146992) DBG | exit 0
	I1009 19:34:48.331218  171452 main.go:141] libmachine: (test-preload-146992) DBG | SSH cmd err, output: <nil>: 
	I1009 19:34:48.331626  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetConfigRaw
	I1009 19:34:48.332247  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetIP
	I1009 19:34:48.335014  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:48.335385  171452 main.go:141] libmachine: (test-preload-146992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:84:6f", ip: ""} in network mk-test-preload-146992: {Iface:virbr1 ExpiryTime:2025-10-09 20:34:44 +0000 UTC Type:0 Mac:52:54:00:12:84:6f Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-146992 Clientid:01:52:54:00:12:84:6f}
	I1009 19:34:48.335414  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined IP address 192.168.39.217 and MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:48.335669  171452 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/test-preload-146992/config.json ...
	I1009 19:34:48.335864  171452 machine.go:93] provisionDockerMachine start ...
	I1009 19:34:48.335884  171452 main.go:141] libmachine: (test-preload-146992) Calling .DriverName
	I1009 19:34:48.336090  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHHostname
	I1009 19:34:48.338662  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:48.339023  171452 main.go:141] libmachine: (test-preload-146992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:84:6f", ip: ""} in network mk-test-preload-146992: {Iface:virbr1 ExpiryTime:2025-10-09 20:34:44 +0000 UTC Type:0 Mac:52:54:00:12:84:6f Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-146992 Clientid:01:52:54:00:12:84:6f}
	I1009 19:34:48.339048  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined IP address 192.168.39.217 and MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:48.339210  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHPort
	I1009 19:34:48.339405  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHKeyPath
	I1009 19:34:48.339552  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHKeyPath
	I1009 19:34:48.339738  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHUsername
	I1009 19:34:48.339928  171452 main.go:141] libmachine: Using SSH client type: native
	I1009 19:34:48.340149  171452 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I1009 19:34:48.340159  171452 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:34:48.455495  171452 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 19:34:48.455523  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetMachineName
	I1009 19:34:48.455760  171452 buildroot.go:166] provisioning hostname "test-preload-146992"
	I1009 19:34:48.455785  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetMachineName
	I1009 19:34:48.456019  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHHostname
	I1009 19:34:48.458607  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:48.458945  171452 main.go:141] libmachine: (test-preload-146992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:84:6f", ip: ""} in network mk-test-preload-146992: {Iface:virbr1 ExpiryTime:2025-10-09 20:34:44 +0000 UTC Type:0 Mac:52:54:00:12:84:6f Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-146992 Clientid:01:52:54:00:12:84:6f}
	I1009 19:34:48.458982  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined IP address 192.168.39.217 and MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:48.459139  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHPort
	I1009 19:34:48.459324  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHKeyPath
	I1009 19:34:48.459482  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHKeyPath
	I1009 19:34:48.459636  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHUsername
	I1009 19:34:48.459806  171452 main.go:141] libmachine: Using SSH client type: native
	I1009 19:34:48.460005  171452 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I1009 19:34:48.460017  171452 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-146992 && echo "test-preload-146992" | sudo tee /etc/hostname
	I1009 19:34:48.591684  171452 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-146992
	
	I1009 19:34:48.591713  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHHostname
	I1009 19:34:48.594954  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:48.595333  171452 main.go:141] libmachine: (test-preload-146992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:84:6f", ip: ""} in network mk-test-preload-146992: {Iface:virbr1 ExpiryTime:2025-10-09 20:34:44 +0000 UTC Type:0 Mac:52:54:00:12:84:6f Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-146992 Clientid:01:52:54:00:12:84:6f}
	I1009 19:34:48.595359  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined IP address 192.168.39.217 and MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:48.595584  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHPort
	I1009 19:34:48.595927  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHKeyPath
	I1009 19:34:48.596104  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHKeyPath
	I1009 19:34:48.596242  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHUsername
	I1009 19:34:48.596389  171452 main.go:141] libmachine: Using SSH client type: native
	I1009 19:34:48.596645  171452 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I1009 19:34:48.596663  171452 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-146992' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-146992/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-146992' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:34:48.719794  171452 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:34:48.719821  171452 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21683-136449/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-136449/.minikube}
	I1009 19:34:48.719865  171452 buildroot.go:174] setting up certificates
	I1009 19:34:48.719874  171452 provision.go:84] configureAuth start
	I1009 19:34:48.719883  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetMachineName
	I1009 19:34:48.720179  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetIP
	I1009 19:34:48.723293  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:48.723730  171452 main.go:141] libmachine: (test-preload-146992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:84:6f", ip: ""} in network mk-test-preload-146992: {Iface:virbr1 ExpiryTime:2025-10-09 20:34:44 +0000 UTC Type:0 Mac:52:54:00:12:84:6f Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-146992 Clientid:01:52:54:00:12:84:6f}
	I1009 19:34:48.723762  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined IP address 192.168.39.217 and MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:48.723946  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHHostname
	I1009 19:34:48.726038  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:48.726340  171452 main.go:141] libmachine: (test-preload-146992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:84:6f", ip: ""} in network mk-test-preload-146992: {Iface:virbr1 ExpiryTime:2025-10-09 20:34:44 +0000 UTC Type:0 Mac:52:54:00:12:84:6f Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-146992 Clientid:01:52:54:00:12:84:6f}
	I1009 19:34:48.726356  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined IP address 192.168.39.217 and MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:48.726519  171452 provision.go:143] copyHostCerts
	I1009 19:34:48.726593  171452 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-136449/.minikube/key.pem, removing ...
	I1009 19:34:48.726615  171452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-136449/.minikube/key.pem
	I1009 19:34:48.726698  171452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-136449/.minikube/key.pem (1675 bytes)
	I1009 19:34:48.726833  171452 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-136449/.minikube/ca.pem, removing ...
	I1009 19:34:48.726842  171452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-136449/.minikube/ca.pem
	I1009 19:34:48.726872  171452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-136449/.minikube/ca.pem (1082 bytes)
	I1009 19:34:48.726927  171452 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-136449/.minikube/cert.pem, removing ...
	I1009 19:34:48.726944  171452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-136449/.minikube/cert.pem
	I1009 19:34:48.726971  171452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-136449/.minikube/cert.pem (1123 bytes)
	I1009 19:34:48.727027  171452 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-136449/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca-key.pem org=jenkins.test-preload-146992 san=[127.0.0.1 192.168.39.217 localhost minikube test-preload-146992]
	I1009 19:34:48.852022  171452 provision.go:177] copyRemoteCerts
	I1009 19:34:48.852100  171452 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:34:48.852136  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHHostname
	I1009 19:34:48.855172  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:48.855542  171452 main.go:141] libmachine: (test-preload-146992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:84:6f", ip: ""} in network mk-test-preload-146992: {Iface:virbr1 ExpiryTime:2025-10-09 20:34:44 +0000 UTC Type:0 Mac:52:54:00:12:84:6f Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-146992 Clientid:01:52:54:00:12:84:6f}
	I1009 19:34:48.855595  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined IP address 192.168.39.217 and MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:48.855771  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHPort
	I1009 19:34:48.855978  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHKeyPath
	I1009 19:34:48.856135  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHUsername
	I1009 19:34:48.856250  171452 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/test-preload-146992/id_rsa Username:docker}
	I1009 19:34:48.945267  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:34:48.974709  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1009 19:34:49.004220  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:34:49.037981  171452 provision.go:87] duration metric: took 318.059584ms to configureAuth
	I1009 19:34:49.038015  171452 buildroot.go:189] setting minikube options for container-runtime
	I1009 19:34:49.038201  171452 config.go:182] Loaded profile config "test-preload-146992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1009 19:34:49.038294  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHHostname
	I1009 19:34:49.041141  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:49.041494  171452 main.go:141] libmachine: (test-preload-146992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:84:6f", ip: ""} in network mk-test-preload-146992: {Iface:virbr1 ExpiryTime:2025-10-09 20:34:44 +0000 UTC Type:0 Mac:52:54:00:12:84:6f Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-146992 Clientid:01:52:54:00:12:84:6f}
	I1009 19:34:49.041526  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined IP address 192.168.39.217 and MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:49.041758  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHPort
	I1009 19:34:49.041983  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHKeyPath
	I1009 19:34:49.042159  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHKeyPath
	I1009 19:34:49.042308  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHUsername
	I1009 19:34:49.042449  171452 main.go:141] libmachine: Using SSH client type: native
	I1009 19:34:49.042722  171452 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I1009 19:34:49.042747  171452 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:34:49.287830  171452 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:34:49.287854  171452 machine.go:96] duration metric: took 951.978064ms to provisionDockerMachine
	I1009 19:34:49.287866  171452 start.go:294] postStartSetup for "test-preload-146992" (driver="kvm2")
	I1009 19:34:49.287875  171452 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:34:49.287892  171452 main.go:141] libmachine: (test-preload-146992) Calling .DriverName
	I1009 19:34:49.288236  171452 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:34:49.288275  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHHostname
	I1009 19:34:49.291137  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:49.291616  171452 main.go:141] libmachine: (test-preload-146992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:84:6f", ip: ""} in network mk-test-preload-146992: {Iface:virbr1 ExpiryTime:2025-10-09 20:34:44 +0000 UTC Type:0 Mac:52:54:00:12:84:6f Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-146992 Clientid:01:52:54:00:12:84:6f}
	I1009 19:34:49.291644  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined IP address 192.168.39.217 and MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:49.291838  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHPort
	I1009 19:34:49.292028  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHKeyPath
	I1009 19:34:49.292176  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHUsername
	I1009 19:34:49.292336  171452 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/test-preload-146992/id_rsa Username:docker}
	I1009 19:34:49.380684  171452 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:34:49.385771  171452 info.go:137] Remote host: Buildroot 2025.02
	I1009 19:34:49.385798  171452 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-136449/.minikube/addons for local assets ...
	I1009 19:34:49.385875  171452 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-136449/.minikube/files for local assets ...
	I1009 19:34:49.385944  171452 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-136449/.minikube/files/etc/ssl/certs/1403582.pem -> 1403582.pem in /etc/ssl/certs
	I1009 19:34:49.386035  171452 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:34:49.398384  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/files/etc/ssl/certs/1403582.pem --> /etc/ssl/certs/1403582.pem (1708 bytes)
	I1009 19:34:49.428486  171452 start.go:297] duration metric: took 140.605188ms for postStartSetup
	I1009 19:34:49.428528  171452 fix.go:57] duration metric: took 16.795917459s for fixHost
	I1009 19:34:49.428547  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHHostname
	I1009 19:34:49.431619  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:49.432028  171452 main.go:141] libmachine: (test-preload-146992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:84:6f", ip: ""} in network mk-test-preload-146992: {Iface:virbr1 ExpiryTime:2025-10-09 20:34:44 +0000 UTC Type:0 Mac:52:54:00:12:84:6f Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-146992 Clientid:01:52:54:00:12:84:6f}
	I1009 19:34:49.432070  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined IP address 192.168.39.217 and MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:49.432233  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHPort
	I1009 19:34:49.432441  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHKeyPath
	I1009 19:34:49.432605  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHKeyPath
	I1009 19:34:49.432786  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHUsername
	I1009 19:34:49.432936  171452 main.go:141] libmachine: Using SSH client type: native
	I1009 19:34:49.433176  171452 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I1009 19:34:49.433191  171452 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 19:34:49.548083  171452 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760038489.501409937
	
	I1009 19:34:49.548103  171452 fix.go:217] guest clock: 1760038489.501409937
	I1009 19:34:49.548110  171452 fix.go:230] Guest: 2025-10-09 19:34:49.501409937 +0000 UTC Remote: 2025-10-09 19:34:49.428532392 +0000 UTC m=+30.186674363 (delta=72.877545ms)
	I1009 19:34:49.548147  171452 fix.go:201] guest clock delta is within tolerance: 72.877545ms
	I1009 19:34:49.548154  171452 start.go:84] releasing machines lock for "test-preload-146992", held for 16.915560985s
	I1009 19:34:49.548178  171452 main.go:141] libmachine: (test-preload-146992) Calling .DriverName
	I1009 19:34:49.548407  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetIP
	I1009 19:34:49.551225  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:49.551661  171452 main.go:141] libmachine: (test-preload-146992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:84:6f", ip: ""} in network mk-test-preload-146992: {Iface:virbr1 ExpiryTime:2025-10-09 20:34:44 +0000 UTC Type:0 Mac:52:54:00:12:84:6f Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-146992 Clientid:01:52:54:00:12:84:6f}
	I1009 19:34:49.551696  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined IP address 192.168.39.217 and MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:49.551849  171452 main.go:141] libmachine: (test-preload-146992) Calling .DriverName
	I1009 19:34:49.552381  171452 main.go:141] libmachine: (test-preload-146992) Calling .DriverName
	I1009 19:34:49.552577  171452 main.go:141] libmachine: (test-preload-146992) Calling .DriverName
	I1009 19:34:49.552665  171452 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:34:49.552720  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHHostname
	I1009 19:34:49.552836  171452 ssh_runner.go:195] Run: cat /version.json
	I1009 19:34:49.552866  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHHostname
	I1009 19:34:49.555910  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:49.555983  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:49.556351  171452 main.go:141] libmachine: (test-preload-146992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:84:6f", ip: ""} in network mk-test-preload-146992: {Iface:virbr1 ExpiryTime:2025-10-09 20:34:44 +0000 UTC Type:0 Mac:52:54:00:12:84:6f Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-146992 Clientid:01:52:54:00:12:84:6f}
	I1009 19:34:49.556379  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined IP address 192.168.39.217 and MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:49.556409  171452 main.go:141] libmachine: (test-preload-146992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:84:6f", ip: ""} in network mk-test-preload-146992: {Iface:virbr1 ExpiryTime:2025-10-09 20:34:44 +0000 UTC Type:0 Mac:52:54:00:12:84:6f Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-146992 Clientid:01:52:54:00:12:84:6f}
	I1009 19:34:49.556428  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined IP address 192.168.39.217 and MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:49.556574  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHPort
	I1009 19:34:49.556775  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHPort
	I1009 19:34:49.556775  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHKeyPath
	I1009 19:34:49.556987  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHKeyPath
	I1009 19:34:49.556994  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHUsername
	I1009 19:34:49.557193  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHUsername
	I1009 19:34:49.557202  171452 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/test-preload-146992/id_rsa Username:docker}
	I1009 19:34:49.557336  171452 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/test-preload-146992/id_rsa Username:docker}
	I1009 19:34:49.639455  171452 ssh_runner.go:195] Run: systemctl --version
	I1009 19:34:49.668022  171452 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:34:49.812143  171452 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:34:49.819536  171452 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:34:49.819605  171452 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:34:49.841632  171452 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 19:34:49.841652  171452 start.go:496] detecting cgroup driver to use...
	I1009 19:34:49.841716  171452 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:34:49.861420  171452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:34:49.878933  171452 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:34:49.878979  171452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:34:49.895989  171452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:34:49.912388  171452 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:34:50.058544  171452 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:34:50.272194  171452 docker.go:234] disabling docker service ...
	I1009 19:34:50.272267  171452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:34:50.288363  171452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:34:50.303229  171452 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:34:50.459795  171452 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:34:50.599623  171452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:34:50.616515  171452 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:34:50.644326  171452 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 19:34:50.644399  171452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:34:50.659706  171452 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:34:50.659798  171452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:34:50.672863  171452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:34:50.685736  171452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:34:50.698353  171452 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:34:50.711714  171452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:34:50.725099  171452 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:34:50.749382  171452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:34:50.764639  171452 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:34:50.775600  171452 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 19:34:50.775657  171452 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 19:34:50.795653  171452 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:34:50.807598  171452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:34:50.946292  171452 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:34:51.054242  171452 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:34:51.054318  171452 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:34:51.060196  171452 start.go:564] Will wait 60s for crictl version
	I1009 19:34:51.060256  171452 ssh_runner.go:195] Run: which crictl
	I1009 19:34:51.064821  171452 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 19:34:51.108724  171452 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 19:34:51.108813  171452 ssh_runner.go:195] Run: crio --version
	I1009 19:34:51.143117  171452 ssh_runner.go:195] Run: crio --version
	I1009 19:34:51.180283  171452 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1009 19:34:51.181240  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetIP
	I1009 19:34:51.184377  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:51.184758  171452 main.go:141] libmachine: (test-preload-146992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:84:6f", ip: ""} in network mk-test-preload-146992: {Iface:virbr1 ExpiryTime:2025-10-09 20:34:44 +0000 UTC Type:0 Mac:52:54:00:12:84:6f Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-146992 Clientid:01:52:54:00:12:84:6f}
	I1009 19:34:51.184788  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined IP address 192.168.39.217 and MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:34:51.185063  171452 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 19:34:51.189709  171452 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:34:51.205286  171452 kubeadm.go:883] updating cluster {Name:test-preload-146992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-146992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:34:51.205401  171452 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1009 19:34:51.205444  171452 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:34:51.246351  171452 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1009 19:34:51.246422  171452 ssh_runner.go:195] Run: which lz4
	I1009 19:34:51.251389  171452 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 19:34:51.256510  171452 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 19:34:51.256549  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1009 19:34:52.813208  171452 crio.go:462] duration metric: took 1.561843289s to copy over tarball
	I1009 19:34:52.813308  171452 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 19:34:54.530829  171452 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.717490283s)
	I1009 19:34:54.530857  171452 crio.go:469] duration metric: took 1.71761364s to extract the tarball
	I1009 19:34:54.530864  171452 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 19:34:54.573404  171452 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:34:54.618189  171452 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:34:54.618221  171452 cache_images.go:85] Images are preloaded, skipping loading
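The preload decision above (crio.go:510 versus crio.go:514) comes down to whether `crictl images --output json` already lists the pinned control-plane images. A sketch of that check; the JSON field names (`images`, `repoTags`) are an assumption about crictl's output shape, not something this log confirms:

// Sketch: decide whether images are preloaded by scanning `crictl images --output json`
// for a required tag, similar to the check logged above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList models only the fields this check needs; the field names are assumed.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.32.0")
	if err != nil {
		fmt.Println("crictl query failed:", err)
		return
	}
	fmt.Println("preloaded:", ok)
}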
	I1009 19:34:54.618229  171452 kubeadm.go:934] updating node { 192.168.39.217 8443 v1.32.0 crio true true} ...
	I1009 19:34:54.618345  171452 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-146992 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-146992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:34:54.618420  171452 ssh_runner.go:195] Run: crio config
	I1009 19:34:54.665413  171452 cni.go:84] Creating CNI manager for ""
	I1009 19:34:54.665476  171452 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 19:34:54.665495  171452 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:34:54.665520  171452 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-146992 NodeName:test-preload-146992 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:34:54.665738  171452 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-146992"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.217"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:34:54.665801  171452 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1009 19:34:54.678434  171452 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:34:54.678498  171452 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:34:54.690040  171452 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1009 19:34:54.710749  171452 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:34:54.730864  171452 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
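The kubeadm.yaml staged above is a single multi-document YAML bundling InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A small sketch that splits such a bundle and lists each document's apiVersion/kind (uses gopkg.in/yaml.v3; the local file name is hypothetical):

// Sketch: enumerate the document kinds in a multi-document kubeadm config like the one above.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	// Hypothetical local copy of the generated config.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc typeMeta
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}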
	I1009 19:34:54.751624  171452 ssh_runner.go:195] Run: grep 192.168.39.217	control-plane.minikube.internal$ /etc/hosts
	I1009 19:34:54.756043  171452 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.217	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:34:54.771100  171452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:34:54.911288  171452 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:34:54.932162  171452 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/test-preload-146992 for IP: 192.168.39.217
	I1009 19:34:54.932183  171452 certs.go:195] generating shared ca certs ...
	I1009 19:34:54.932209  171452 certs.go:227] acquiring lock for ca certs: {Name:mkad58f6533e9a5aa8b52ac28f20029620803fc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:34:54.932361  171452 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-136449/.minikube/ca.key
	I1009 19:34:54.932414  171452 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-136449/.minikube/proxy-client-ca.key
	I1009 19:34:54.932428  171452 certs.go:257] generating profile certs ...
	I1009 19:34:54.932500  171452 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/test-preload-146992/client.key
	I1009 19:34:54.932568  171452 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/test-preload-146992/apiserver.key.68e861af
	I1009 19:34:54.932603  171452 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/test-preload-146992/proxy-client.key
	I1009 19:34:54.932713  171452 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/140358.pem (1338 bytes)
	W1009 19:34:54.932740  171452 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-136449/.minikube/certs/140358_empty.pem, impossibly tiny 0 bytes
	I1009 19:34:54.932749  171452 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:34:54.932776  171452 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem (1082 bytes)
	I1009 19:34:54.932797  171452 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:34:54.932816  171452 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/key.pem (1675 bytes)
	I1009 19:34:54.932856  171452 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-136449/.minikube/files/etc/ssl/certs/1403582.pem (1708 bytes)
	I1009 19:34:54.933367  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:34:54.977580  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:34:55.006947  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:34:55.040181  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:34:55.071189  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/test-preload-146992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1009 19:34:55.101773  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/test-preload-146992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:34:55.131932  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/test-preload-146992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:34:55.162093  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/test-preload-146992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:34:55.192066  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:34:55.221495  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/certs/140358.pem --> /usr/share/ca-certificates/140358.pem (1338 bytes)
	I1009 19:34:55.250707  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/files/etc/ssl/certs/1403582.pem --> /usr/share/ca-certificates/1403582.pem (1708 bytes)
	I1009 19:34:55.279850  171452 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:34:55.300709  171452 ssh_runner.go:195] Run: openssl version
	I1009 19:34:55.307958  171452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:34:55.324110  171452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:34:55.329923  171452 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:34:55.329970  171452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:34:55.337857  171452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:34:55.353680  171452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140358.pem && ln -fs /usr/share/ca-certificates/140358.pem /etc/ssl/certs/140358.pem"
	I1009 19:34:55.369615  171452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140358.pem
	I1009 19:34:55.375432  171452 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:48 /usr/share/ca-certificates/140358.pem
	I1009 19:34:55.375498  171452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140358.pem
	I1009 19:34:55.383454  171452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140358.pem /etc/ssl/certs/51391683.0"
	I1009 19:34:55.399823  171452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403582.pem && ln -fs /usr/share/ca-certificates/1403582.pem /etc/ssl/certs/1403582.pem"
	I1009 19:34:55.416069  171452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403582.pem
	I1009 19:34:55.422418  171452 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:48 /usr/share/ca-certificates/1403582.pem
	I1009 19:34:55.422503  171452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403582.pem
	I1009 19:34:55.430532  171452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403582.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:34:55.446639  171452 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:34:55.452371  171452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:34:55.460666  171452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:34:55.468658  171452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:34:55.476545  171452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:34:55.484532  171452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:34:55.492498  171452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
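Each `openssl x509 -noout -checkend 86400` run above asks whether the certificate will still be valid 24 hours from now. A Go equivalent of a single such check (the input path here is hypothetical; the real files live under /var/lib/minikube/certs on the VM):

// Sketch: the Go counterpart of `openssl x509 -noout -checkend 86400` -
// report whether a PEM certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Hypothetical local path standing in for e.g. apiserver-kubelet-client.crt.
	soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}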
	I1009 19:34:55.500493  171452 kubeadm.go:400] StartCluster: {Name:test-preload-146992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-146992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:34:55.500577  171452 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:34:55.500644  171452 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:34:55.548008  171452 cri.go:89] found id: ""
	I1009 19:34:55.548084  171452 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:34:55.561060  171452 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:34:55.561085  171452 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:34:55.561151  171452 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:34:55.573621  171452 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:34:55.574059  171452 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-146992" does not appear in /home/jenkins/minikube-integration/21683-136449/kubeconfig
	I1009 19:34:55.574209  171452 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-136449/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-146992" cluster setting kubeconfig missing "test-preload-146992" context setting]
	I1009 19:34:55.574507  171452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-136449/kubeconfig: {Name:mk0cc9985a025be104fc679cfaab9640e2d88e46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:34:55.575023  171452 kapi.go:59] client config for test-preload-146992: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-136449/.minikube/profiles/test-preload-146992/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-136449/.minikube/profiles/test-preload-146992/client.key", CAFile:"/home/jenkins/minikube-integration/21683-136449/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:34:55.575439  171452 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:34:55.575453  171452 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 19:34:55.575458  171452 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 19:34:55.575464  171452 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:34:55.575474  171452 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 19:34:55.575867  171452 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:34:55.588579  171452 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.217
	I1009 19:34:55.588614  171452 kubeadm.go:1160] stopping kube-system containers ...
	I1009 19:34:55.588627  171452 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 19:34:55.588685  171452 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:34:55.631617  171452 cri.go:89] found id: ""
	I1009 19:34:55.631690  171452 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 19:34:55.655800  171452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:34:55.668577  171452 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:34:55.668601  171452 kubeadm.go:157] found existing configuration files:
	
	I1009 19:34:55.668673  171452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:34:55.680158  171452 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:34:55.680244  171452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:34:55.693002  171452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:34:55.704938  171452 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:34:55.705009  171452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:34:55.718223  171452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:34:55.730679  171452 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:34:55.730742  171452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:34:55.744364  171452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:34:55.757158  171452 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:34:55.757244  171452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:34:55.770754  171452 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:34:55.784513  171452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 19:34:55.842988  171452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 19:34:56.823986  171452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 19:34:57.083261  171452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 19:34:57.156785  171452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 19:34:57.262808  171452 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:34:57.262913  171452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:34:57.763744  171452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:34:58.263829  171452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:34:58.763032  171452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:34:59.263014  171452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:34:59.763345  171452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:34:59.789677  171452 api_server.go:72] duration metric: took 2.526886298s to wait for apiserver process to appear ...
	I1009 19:34:59.789715  171452 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:34:59.789735  171452 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I1009 19:35:02.051753  171452 api_server.go:279] https://192.168.39.217:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 19:35:02.051806  171452 api_server.go:103] status: https://192.168.39.217:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 19:35:02.051821  171452 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I1009 19:35:02.079409  171452 api_server.go:279] https://192.168.39.217:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 19:35:02.079444  171452 api_server.go:103] status: https://192.168.39.217:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 19:35:02.289816  171452 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I1009 19:35:02.299397  171452 api_server.go:279] https://192.168.39.217:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 19:35:02.299431  171452 api_server.go:103] status: https://192.168.39.217:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 19:35:02.790389  171452 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I1009 19:35:02.795113  171452 api_server.go:279] https://192.168.39.217:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 19:35:02.795165  171452 api_server.go:103] status: https://192.168.39.217:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 19:35:03.289798  171452 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I1009 19:35:03.296675  171452 api_server.go:279] https://192.168.39.217:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 19:35:03.296713  171452 api_server.go:103] status: https://192.168.39.217:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 19:35:03.790461  171452 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I1009 19:35:03.795145  171452 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I1009 19:35:03.802526  171452 api_server.go:141] control plane version: v1.32.0
	I1009 19:35:03.802572  171452 api_server.go:131] duration metric: took 4.012834479s to wait for apiserver health ...
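The healthz sequence above is the normal restart progression: 403 while anonymous access to /healthz is still forbidden, 500 while the rbac/bootstrap-roles and bootstrap-system-priority-classes post-start hooks finish, then 200. A minimal polling sketch against the same endpoint; InsecureSkipVerify stands in for the client-certificate configuration the real client uses, so this is an illustration rather than minikube's implementation:

// Sketch: poll the apiserver /healthz endpoint until it returns 200 or a deadline passes,
// mirroring the retry loop in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The real client authenticates with the profile's client cert;
		// skipping verification keeps this sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not report healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.217:8443/healthz", time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}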
	I1009 19:35:03.802583  171452 cni.go:84] Creating CNI manager for ""
	I1009 19:35:03.802589  171452 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 19:35:03.804527  171452 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 19:35:03.805728  171452 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 19:35:03.820328  171452 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 19:35:03.844986  171452 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:35:03.852410  171452 system_pods.go:59] 7 kube-system pods found
	I1009 19:35:03.852461  171452 system_pods.go:61] "coredns-668d6bf9bc-pvhjc" [eadd752d-d1f8-4c96-bb3c-558d4689b824] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 19:35:03.852475  171452 system_pods.go:61] "etcd-test-preload-146992" [55400f09-b639-403b-87d1-82b7d0387c4e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:35:03.852487  171452 system_pods.go:61] "kube-apiserver-test-preload-146992" [5cb867be-0841-4a54-957f-3907e2f00211] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:35:03.852496  171452 system_pods.go:61] "kube-controller-manager-test-preload-146992" [d8741256-c0c9-4c68-9534-d8575f3d4f00] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:35:03.852504  171452 system_pods.go:61] "kube-proxy-h29th" [fb40f601-5e8e-4e2e-a121-c8e74138f123] Running
	I1009 19:35:03.852512  171452 system_pods.go:61] "kube-scheduler-test-preload-146992" [4a6d7dfd-7a80-42f5-a56b-7b8ce9416da5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:35:03.852521  171452 system_pods.go:61] "storage-provisioner" [6d35bfd7-f17d-41f2-aef8-835aff67b98d] Running
	I1009 19:35:03.852530  171452 system_pods.go:74] duration metric: took 7.512964ms to wait for pod list to return data ...
	I1009 19:35:03.852543  171452 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:35:03.858720  171452 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 19:35:03.858764  171452 node_conditions.go:123] node cpu capacity is 2
	I1009 19:35:03.858781  171452 node_conditions.go:105] duration metric: took 6.231547ms to run NodePressure ...
	I1009 19:35:03.858853  171452 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 19:35:04.123431  171452 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1009 19:35:04.130207  171452 kubeadm.go:743] kubelet initialised
	I1009 19:35:04.130230  171452 kubeadm.go:744] duration metric: took 6.775679ms waiting for restarted kubelet to initialise ...
	I1009 19:35:04.130247  171452 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 19:35:04.160856  171452 ops.go:34] apiserver oom_adj: -16
	I1009 19:35:04.160887  171452 kubeadm.go:601] duration metric: took 8.599793377s to restartPrimaryControlPlane
	I1009 19:35:04.160896  171452 kubeadm.go:402] duration metric: took 8.660412149s to StartCluster
	I1009 19:35:04.160913  171452 settings.go:142] acquiring lock: {Name:mk9b9e0b3207d052c253a9ce8599048f2fcb59d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:35:04.160992  171452 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-136449/kubeconfig
	I1009 19:35:04.161521  171452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-136449/kubeconfig: {Name:mk0cc9985a025be104fc679cfaab9640e2d88e46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:35:04.161796  171452 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:35:04.161850  171452 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:35:04.161962  171452 addons.go:69] Setting storage-provisioner=true in profile "test-preload-146992"
	I1009 19:35:04.161994  171452 addons.go:238] Setting addon storage-provisioner=true in "test-preload-146992"
	W1009 19:35:04.162003  171452 addons.go:247] addon storage-provisioner should already be in state true
	I1009 19:35:04.162003  171452 config.go:182] Loaded profile config "test-preload-146992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1009 19:35:04.162014  171452 addons.go:69] Setting default-storageclass=true in profile "test-preload-146992"
	I1009 19:35:04.162033  171452 host.go:66] Checking if "test-preload-146992" exists ...
	I1009 19:35:04.162058  171452 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-146992"
	I1009 19:35:04.162370  171452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:35:04.162425  171452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:35:04.162435  171452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:35:04.162474  171452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:35:04.163934  171452 out.go:179] * Verifying Kubernetes components...
	I1009 19:35:04.165324  171452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:35:04.176887  171452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38249
	I1009 19:35:04.176977  171452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43061
	I1009 19:35:04.177378  171452 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:35:04.177478  171452 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:35:04.177888  171452 main.go:141] libmachine: Using API Version  1
	I1009 19:35:04.177910  171452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:35:04.178042  171452 main.go:141] libmachine: Using API Version  1
	I1009 19:35:04.178061  171452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:35:04.178363  171452 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:35:04.178466  171452 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:35:04.178609  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetState
	I1009 19:35:04.179031  171452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:35:04.179088  171452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:35:04.181196  171452 kapi.go:59] client config for test-preload-146992: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-136449/.minikube/profiles/test-preload-146992/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-136449/.minikube/profiles/test-preload-146992/client.key", CAFile:"/home/jenkins/minikube-integration/21683-136449/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:35:04.181572  171452 addons.go:238] Setting addon default-storageclass=true in "test-preload-146992"
	W1009 19:35:04.181596  171452 addons.go:247] addon default-storageclass should already be in state true
	I1009 19:35:04.181627  171452 host.go:66] Checking if "test-preload-146992" exists ...
	I1009 19:35:04.181993  171452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:35:04.182047  171452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:35:04.193621  171452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40937
	I1009 19:35:04.194130  171452 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:35:04.194671  171452 main.go:141] libmachine: Using API Version  1
	I1009 19:35:04.194704  171452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:35:04.195058  171452 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:35:04.195250  171452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45421
	I1009 19:35:04.195269  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetState
	I1009 19:35:04.195626  171452 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:35:04.196077  171452 main.go:141] libmachine: Using API Version  1
	I1009 19:35:04.196102  171452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:35:04.196579  171452 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:35:04.197126  171452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:35:04.197161  171452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:35:04.197427  171452 main.go:141] libmachine: (test-preload-146992) Calling .DriverName
	I1009 19:35:04.203159  171452 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:35:04.204677  171452 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:35:04.204711  171452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:35:04.204743  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHHostname
	I1009 19:35:04.208991  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:35:04.209604  171452 main.go:141] libmachine: (test-preload-146992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:84:6f", ip: ""} in network mk-test-preload-146992: {Iface:virbr1 ExpiryTime:2025-10-09 20:34:44 +0000 UTC Type:0 Mac:52:54:00:12:84:6f Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-146992 Clientid:01:52:54:00:12:84:6f}
	I1009 19:35:04.209642  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined IP address 192.168.39.217 and MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:35:04.209889  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHPort
	I1009 19:35:04.210104  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHKeyPath
	I1009 19:35:04.210280  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHUsername
	I1009 19:35:04.210438  171452 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/test-preload-146992/id_rsa Username:docker}
	I1009 19:35:04.212320  171452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36587
	I1009 19:35:04.212801  171452 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:35:04.213281  171452 main.go:141] libmachine: Using API Version  1
	I1009 19:35:04.213313  171452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:35:04.213649  171452 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:35:04.213854  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetState
	I1009 19:35:04.215666  171452 main.go:141] libmachine: (test-preload-146992) Calling .DriverName
	I1009 19:35:04.215868  171452 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:35:04.215886  171452 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:35:04.215906  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHHostname
	I1009 19:35:04.219409  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:35:04.219895  171452 main.go:141] libmachine: (test-preload-146992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:84:6f", ip: ""} in network mk-test-preload-146992: {Iface:virbr1 ExpiryTime:2025-10-09 20:34:44 +0000 UTC Type:0 Mac:52:54:00:12:84:6f Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:test-preload-146992 Clientid:01:52:54:00:12:84:6f}
	I1009 19:35:04.219925  171452 main.go:141] libmachine: (test-preload-146992) DBG | domain test-preload-146992 has defined IP address 192.168.39.217 and MAC address 52:54:00:12:84:6f in network mk-test-preload-146992
	I1009 19:35:04.220171  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHPort
	I1009 19:35:04.220359  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHKeyPath
	I1009 19:35:04.220513  171452 main.go:141] libmachine: (test-preload-146992) Calling .GetSSHUsername
	I1009 19:35:04.220680  171452 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/test-preload-146992/id_rsa Username:docker}
	I1009 19:35:04.422475  171452 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:35:04.442161  171452 node_ready.go:35] waiting up to 6m0s for node "test-preload-146992" to be "Ready" ...
	I1009 19:35:04.650331  171452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:35:04.661769  171452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:35:04.844495  171452 main.go:141] libmachine: Making call to close driver server
	I1009 19:35:04.844544  171452 main.go:141] libmachine: (test-preload-146992) Calling .Close
	I1009 19:35:04.844882  171452 main.go:141] libmachine: Successfully made call to close driver server
	I1009 19:35:04.844901  171452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 19:35:04.844911  171452 main.go:141] libmachine: Making call to close driver server
	I1009 19:35:04.844920  171452 main.go:141] libmachine: (test-preload-146992) Calling .Close
	I1009 19:35:04.844918  171452 main.go:141] libmachine: (test-preload-146992) DBG | Closing plugin on server side
	I1009 19:35:04.845157  171452 main.go:141] libmachine: Successfully made call to close driver server
	I1009 19:35:04.845176  171452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 19:35:04.856689  171452 main.go:141] libmachine: Making call to close driver server
	I1009 19:35:04.856718  171452 main.go:141] libmachine: (test-preload-146992) Calling .Close
	I1009 19:35:04.857046  171452 main.go:141] libmachine: Successfully made call to close driver server
	I1009 19:35:04.857064  171452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 19:35:05.454254  171452 main.go:141] libmachine: Making call to close driver server
	I1009 19:35:05.454284  171452 main.go:141] libmachine: (test-preload-146992) Calling .Close
	I1009 19:35:05.454608  171452 main.go:141] libmachine: Successfully made call to close driver server
	I1009 19:35:05.454628  171452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 19:35:05.454638  171452 main.go:141] libmachine: Making call to close driver server
	I1009 19:35:05.454639  171452 main.go:141] libmachine: (test-preload-146992) DBG | Closing plugin on server side
	I1009 19:35:05.454646  171452 main.go:141] libmachine: (test-preload-146992) Calling .Close
	I1009 19:35:05.454880  171452 main.go:141] libmachine: Successfully made call to close driver server
	I1009 19:35:05.454906  171452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 19:35:05.454884  171452 main.go:141] libmachine: (test-preload-146992) DBG | Closing plugin on server side
	I1009 19:35:05.456833  171452 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1009 19:35:05.458095  171452 addons.go:514] duration metric: took 1.296237914s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1009 19:35:06.447090  171452 node_ready.go:57] node "test-preload-146992" has "Ready":"False" status (will retry)
	W1009 19:35:08.946477  171452 node_ready.go:57] node "test-preload-146992" has "Ready":"False" status (will retry)
	W1009 19:35:11.446438  171452 node_ready.go:57] node "test-preload-146992" has "Ready":"False" status (will retry)
	I1009 19:35:12.945449  171452 node_ready.go:49] node "test-preload-146992" is "Ready"
	I1009 19:35:12.945492  171452 node_ready.go:38] duration metric: took 8.503284202s for node "test-preload-146992" to be "Ready" ...
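	The node readiness wait logged above is, at bottom, a poll of the node's Ready condition until it reports True. A minimal client-go sketch of that check, assuming a kubeconfig at the default location and the node name from this run (illustrative only, not minikube's own node_ready helper):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig path; minikube writes its own under the profile directory.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "test-preload-146992", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
			// The log above retries roughly every 2.5s until the kubelet posts Ready.
			time.Sleep(2 * time.Second)
		}
	}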
	I1009 19:35:12.945510  171452 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:35:12.945587  171452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:35:12.966399  171452 api_server.go:72] duration metric: took 8.804565386s to wait for apiserver process to appear ...
	I1009 19:35:12.966436  171452 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:35:12.966462  171452 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I1009 19:35:12.973260  171452 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I1009 19:35:12.974298  171452 api_server.go:141] control plane version: v1.32.0
	I1009 19:35:12.974321  171452 api_server.go:131] duration metric: took 7.877195ms to wait for apiserver health ...
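	The healthz wait above reduces to hitting the apiserver's /healthz endpoint until it answers 200 with body "ok". A bare-bones sketch of that probe, with TLS verification disabled purely for illustration (the real check authenticates with the cluster CA and client certs listed in the rest.Config later in this log):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the log above; InsecureSkipVerify is an illustrative shortcut only.
		url := "https://192.168.39.217:8443/healthz"
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					fmt.Println("apiserver healthz returned 200: ok")
					return
				}
			}
			time.Sleep(time.Second)
		}
	}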
	I1009 19:35:12.974333  171452 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:35:12.980634  171452 system_pods.go:59] 7 kube-system pods found
	I1009 19:35:12.980672  171452 system_pods.go:61] "coredns-668d6bf9bc-pvhjc" [eadd752d-d1f8-4c96-bb3c-558d4689b824] Running
	I1009 19:35:12.980684  171452 system_pods.go:61] "etcd-test-preload-146992" [55400f09-b639-403b-87d1-82b7d0387c4e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:35:12.980692  171452 system_pods.go:61] "kube-apiserver-test-preload-146992" [5cb867be-0841-4a54-957f-3907e2f00211] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:35:12.980700  171452 system_pods.go:61] "kube-controller-manager-test-preload-146992" [d8741256-c0c9-4c68-9534-d8575f3d4f00] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:35:12.980704  171452 system_pods.go:61] "kube-proxy-h29th" [fb40f601-5e8e-4e2e-a121-c8e74138f123] Running
	I1009 19:35:12.980710  171452 system_pods.go:61] "kube-scheduler-test-preload-146992" [4a6d7dfd-7a80-42f5-a56b-7b8ce9416da5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:35:12.980721  171452 system_pods.go:61] "storage-provisioner" [6d35bfd7-f17d-41f2-aef8-835aff67b98d] Running
	I1009 19:35:12.980729  171452 system_pods.go:74] duration metric: took 6.38939ms to wait for pod list to return data ...
	I1009 19:35:12.980737  171452 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:35:12.983820  171452 default_sa.go:45] found service account: "default"
	I1009 19:35:12.983844  171452 default_sa.go:55] duration metric: took 3.101468ms for default service account to be created ...
	I1009 19:35:12.983852  171452 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:35:12.987033  171452 system_pods.go:86] 7 kube-system pods found
	I1009 19:35:12.987059  171452 system_pods.go:89] "coredns-668d6bf9bc-pvhjc" [eadd752d-d1f8-4c96-bb3c-558d4689b824] Running
	I1009 19:35:12.987088  171452 system_pods.go:89] "etcd-test-preload-146992" [55400f09-b639-403b-87d1-82b7d0387c4e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 19:35:12.987096  171452 system_pods.go:89] "kube-apiserver-test-preload-146992" [5cb867be-0841-4a54-957f-3907e2f00211] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 19:35:12.987103  171452 system_pods.go:89] "kube-controller-manager-test-preload-146992" [d8741256-c0c9-4c68-9534-d8575f3d4f00] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 19:35:12.987112  171452 system_pods.go:89] "kube-proxy-h29th" [fb40f601-5e8e-4e2e-a121-c8e74138f123] Running
	I1009 19:35:12.987120  171452 system_pods.go:89] "kube-scheduler-test-preload-146992" [4a6d7dfd-7a80-42f5-a56b-7b8ce9416da5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 19:35:12.987126  171452 system_pods.go:89] "storage-provisioner" [6d35bfd7-f17d-41f2-aef8-835aff67b98d] Running
	I1009 19:35:12.987134  171452 system_pods.go:126] duration metric: took 3.276272ms to wait for k8s-apps to be running ...
	I1009 19:35:12.987141  171452 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:35:12.987192  171452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:35:13.006342  171452 system_svc.go:56] duration metric: took 19.189787ms WaitForService to wait for kubelet
	I1009 19:35:13.006377  171452 kubeadm.go:586] duration metric: took 8.844553404s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:35:13.006393  171452 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:35:13.009124  171452 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 19:35:13.009145  171452 node_conditions.go:123] node cpu capacity is 2
	I1009 19:35:13.009156  171452 node_conditions.go:105] duration metric: took 2.759008ms to run NodePressure ...
	I1009 19:35:13.009167  171452 start.go:242] waiting for startup goroutines ...
	I1009 19:35:13.009173  171452 start.go:247] waiting for cluster config update ...
	I1009 19:35:13.009183  171452 start.go:256] writing updated cluster config ...
	I1009 19:35:13.009499  171452 ssh_runner.go:195] Run: rm -f paused
	I1009 19:35:13.015375  171452 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:35:13.015824  171452 kapi.go:59] client config for test-preload-146992: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-136449/.minikube/profiles/test-preload-146992/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-136449/.minikube/profiles/test-preload-146992/client.key", CAFile:"/home/jenkins/minikube-integration/21683-136449/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819c00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:35:13.018849  171452 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-pvhjc" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:35:13.024235  171452 pod_ready.go:94] pod "coredns-668d6bf9bc-pvhjc" is "Ready"
	I1009 19:35:13.024256  171452 pod_ready.go:86] duration metric: took 5.386608ms for pod "coredns-668d6bf9bc-pvhjc" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:35:13.026485  171452 pod_ready.go:83] waiting for pod "etcd-test-preload-146992" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:35:14.033722  171452 pod_ready.go:94] pod "etcd-test-preload-146992" is "Ready"
	I1009 19:35:14.033746  171452 pod_ready.go:86] duration metric: took 1.007237673s for pod "etcd-test-preload-146992" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:35:14.035914  171452 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-146992" in "kube-system" namespace to be "Ready" or be gone ...
	W1009 19:35:16.043633  171452 pod_ready.go:104] pod "kube-apiserver-test-preload-146992" is not "Ready", error: <nil>
	I1009 19:35:17.041793  171452 pod_ready.go:94] pod "kube-apiserver-test-preload-146992" is "Ready"
	I1009 19:35:17.041826  171452 pod_ready.go:86] duration metric: took 3.005891244s for pod "kube-apiserver-test-preload-146992" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:35:17.044053  171452 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-146992" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:35:17.048657  171452 pod_ready.go:94] pod "kube-controller-manager-test-preload-146992" is "Ready"
	I1009 19:35:17.048685  171452 pod_ready.go:86] duration metric: took 4.600261ms for pod "kube-controller-manager-test-preload-146992" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:35:17.051158  171452 pod_ready.go:83] waiting for pod "kube-proxy-h29th" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:35:17.219112  171452 pod_ready.go:94] pod "kube-proxy-h29th" is "Ready"
	I1009 19:35:17.219149  171452 pod_ready.go:86] duration metric: took 167.96039ms for pod "kube-proxy-h29th" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:35:17.419459  171452 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-146992" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:35:17.819730  171452 pod_ready.go:94] pod "kube-scheduler-test-preload-146992" is "Ready"
	I1009 19:35:17.819771  171452 pod_ready.go:86] duration metric: took 400.275579ms for pod "kube-scheduler-test-preload-146992" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:35:17.819788  171452 pod_ready.go:40] duration metric: took 4.804379646s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
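	The extra pod_ready wait above cycles through the listed control-plane label selectors and checks each matching pod's Ready condition. A small client-go sketch of one such check, using the k8s-app=kube-dns selector from the log (illustrative, not the test framework's pod_ready helper):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the pod's Ready condition is True.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			panic(err)
		}
		for i := range pods.Items {
			fmt.Printf("%s ready=%v\n", pods.Items[i].Name, podIsReady(&pods.Items[i]))
		}
	}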
	I1009 19:35:17.862968  171452 start.go:628] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1009 19:35:17.864643  171452 out.go:203] 
	W1009 19:35:17.866173  171452 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1009 19:35:17.867622  171452 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1009 19:35:17.868776  171452 out.go:179] * Done! kubectl is now configured to use "test-preload-146992" cluster and "default" namespace by default
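	The "minor skew: 2" figure above is simply the gap between the kubectl client's minor version (1.34) and the cluster's (1.32). A toy calculation of that skew for illustration (not minikube's actual version-check code):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorOf extracts the minor component of a "major.minor.patch" version string.
	func minorOf(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		m, _ := strconv.Atoi(parts[1])
		return m
	}

	func main() {
		client, cluster := "1.34.1", "1.32.0"
		skew := minorOf(client) - minorOf(cluster)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("minor skew: %d\n", skew) // prints 2, matching the warning above
	}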
	
	
	==> CRI-O <==
	Oct 09 19:35:18 test-preload-146992 crio[832]: time="2025-10-09 19:35:18.782022452Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760038518781942141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5c111e19-6194-4905-b333-4d4ad619a8ed name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:35:18 test-preload-146992 crio[832]: time="2025-10-09 19:35:18.782804708Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=57690ccd-1321-458d-808d-6f641e606b5a name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:35:18 test-preload-146992 crio[832]: time="2025-10-09 19:35:18.782913010Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=57690ccd-1321-458d-808d-6f641e606b5a name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:35:18 test-preload-146992 crio[832]: time="2025-10-09 19:35:18.783241921Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f583ea3ba356868b47d4b1554d73f42aebbc450b363ff1e95635659fb6360270,PodSandboxId:eeeb2f4ee611e30b9696a4309623d9dd3a4231fe0c20fb6ec70fad5b80ff7333,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760038510232600269,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-pvhjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eadd752d-d1f8-4c96-bb3c-558d4689b824,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad2478a9f5ab2e3aa9e159d1b1af3170714a491ad6cc89baa69d8c8dd9bb49da,PodSandboxId:c6fc330294967172e78f1840a051d5e502840fa59892e1ccb3c7b57b71c1d7db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760038502607746245,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h29th,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fb40f601-5e8e-4e2e-a121-c8e74138f123,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82b0fdd33d159e08e31e07b949b82cdf1f63311e452167e98357cd424cdb3dcb,PodSandboxId:3758f4cdb60bcaeda7a5548c7dc21d1138aa63315f97251b2e7ce01994ec8a33,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760038502584890179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d
35bfd7-f17d-41f2-aef8-835aff67b98d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3b5ea1d8f851c8720b8ef80da01af905fd8fa1d8e53044c80b1cc668a3f88f0,PodSandboxId:142b92395cd1a579b3694e401bf28d894b9cec007b08c518534cbd4093776a47,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760038499370012794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-146992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4289b040a6a87258cdffb981aad63e4a,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af8856fd869633c8fc0035dcee9147b47e66edb57e291ba8ce41a190cb19c709,PodSandboxId:88a4c08842c96998ff5afc8b52dafaca5c6cd69a97328a86de3ce08f44ab79b4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760038499389721627,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-146992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 047b1ba09756428b9fd39112c358fd83,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29dea532b76bf67743e0f4c6ae97d558c181350e7af05d0a4081ba70ae58741e,PodSandboxId:8b081d5c79a7e1d24fba2bc12e9b5f2a8969fe3ae6f01f5870e18a9dcac0f4a1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760038499348943817,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-146992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c579e01b4aef970745a1639d35965fb0,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed48ec15f7307ff8ce1d66b6866072c5b8ce25a65784ea8fae2a60327853035f,PodSandboxId:f51f2547ab43cc1da679a167778a7e017e0e5375c7ecf9b738597b8f2c6f3486,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760038499328129382,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-146992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 080b3e0d309f0cfdbf16969304a70b37,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=57690ccd-1321-458d-808d-6f641e606b5a name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:35:18 test-preload-146992 crio[832]: time="2025-10-09 19:35:18.826440070Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d6e19caf-18b9-4881-bdde-98f7752fa765 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:35:18 test-preload-146992 crio[832]: time="2025-10-09 19:35:18.826538549Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d6e19caf-18b9-4881-bdde-98f7752fa765 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:35:18 test-preload-146992 crio[832]: time="2025-10-09 19:35:18.828114443Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dadae8ae-3a8f-4e86-a43c-86828d604006 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:35:18 test-preload-146992 crio[832]: time="2025-10-09 19:35:18.829204511Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760038518829120558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dadae8ae-3a8f-4e86-a43c-86828d604006 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:35:18 test-preload-146992 crio[832]: time="2025-10-09 19:35:18.830031650Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e312ed9-04a2-4af4-8d97-5a408e0b1a87 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:35:18 test-preload-146992 crio[832]: time="2025-10-09 19:35:18.830275313Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e312ed9-04a2-4af4-8d97-5a408e0b1a87 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:35:18 test-preload-146992 crio[832]: time="2025-10-09 19:35:18.830507320Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f583ea3ba356868b47d4b1554d73f42aebbc450b363ff1e95635659fb6360270,PodSandboxId:eeeb2f4ee611e30b9696a4309623d9dd3a4231fe0c20fb6ec70fad5b80ff7333,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760038510232600269,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-pvhjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eadd752d-d1f8-4c96-bb3c-558d4689b824,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad2478a9f5ab2e3aa9e159d1b1af3170714a491ad6cc89baa69d8c8dd9bb49da,PodSandboxId:c6fc330294967172e78f1840a051d5e502840fa59892e1ccb3c7b57b71c1d7db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760038502607746245,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h29th,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fb40f601-5e8e-4e2e-a121-c8e74138f123,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82b0fdd33d159e08e31e07b949b82cdf1f63311e452167e98357cd424cdb3dcb,PodSandboxId:3758f4cdb60bcaeda7a5548c7dc21d1138aa63315f97251b2e7ce01994ec8a33,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760038502584890179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d
35bfd7-f17d-41f2-aef8-835aff67b98d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3b5ea1d8f851c8720b8ef80da01af905fd8fa1d8e53044c80b1cc668a3f88f0,PodSandboxId:142b92395cd1a579b3694e401bf28d894b9cec007b08c518534cbd4093776a47,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760038499370012794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-146992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4289b040a6a87258cdffb981aad63e4a,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af8856fd869633c8fc0035dcee9147b47e66edb57e291ba8ce41a190cb19c709,PodSandboxId:88a4c08842c96998ff5afc8b52dafaca5c6cd69a97328a86de3ce08f44ab79b4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760038499389721627,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-146992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 047b1ba09756428b9fd39112c358fd83,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29dea532b76bf67743e0f4c6ae97d558c181350e7af05d0a4081ba70ae58741e,PodSandboxId:8b081d5c79a7e1d24fba2bc12e9b5f2a8969fe3ae6f01f5870e18a9dcac0f4a1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760038499348943817,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-146992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c579e01b4aef970745a1639d35965fb0,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed48ec15f7307ff8ce1d66b6866072c5b8ce25a65784ea8fae2a60327853035f,PodSandboxId:f51f2547ab43cc1da679a167778a7e017e0e5375c7ecf9b738597b8f2c6f3486,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760038499328129382,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-146992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 080b3e0d309f0cfdbf16969304a70b37,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e312ed9-04a2-4af4-8d97-5a408e0b1a87 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:35:18 test-preload-146992 crio[832]: time="2025-10-09 19:35:18.871472325Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a6ed7b52-d1d1-4dfc-bea0-d955b2a7d83f name=/runtime.v1.RuntimeService/Version
	Oct 09 19:35:18 test-preload-146992 crio[832]: time="2025-10-09 19:35:18.871575391Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a6ed7b52-d1d1-4dfc-bea0-d955b2a7d83f name=/runtime.v1.RuntimeService/Version
	Oct 09 19:35:18 test-preload-146992 crio[832]: time="2025-10-09 19:35:18.872946024Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d4693eae-6d9f-4ded-aa81-9ccb9e3a48d6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:35:18 test-preload-146992 crio[832]: time="2025-10-09 19:35:18.873472687Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760038518873450583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d4693eae-6d9f-4ded-aa81-9ccb9e3a48d6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:35:18 test-preload-146992 crio[832]: time="2025-10-09 19:35:18.874061503Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22618711-91b8-422e-8a44-c3d8220e9437 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:35:18 test-preload-146992 crio[832]: time="2025-10-09 19:35:18.874113323Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22618711-91b8-422e-8a44-c3d8220e9437 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:35:18 test-preload-146992 crio[832]: time="2025-10-09 19:35:18.874356120Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f583ea3ba356868b47d4b1554d73f42aebbc450b363ff1e95635659fb6360270,PodSandboxId:eeeb2f4ee611e30b9696a4309623d9dd3a4231fe0c20fb6ec70fad5b80ff7333,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760038510232600269,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-pvhjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eadd752d-d1f8-4c96-bb3c-558d4689b824,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad2478a9f5ab2e3aa9e159d1b1af3170714a491ad6cc89baa69d8c8dd9bb49da,PodSandboxId:c6fc330294967172e78f1840a051d5e502840fa59892e1ccb3c7b57b71c1d7db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760038502607746245,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h29th,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fb40f601-5e8e-4e2e-a121-c8e74138f123,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82b0fdd33d159e08e31e07b949b82cdf1f63311e452167e98357cd424cdb3dcb,PodSandboxId:3758f4cdb60bcaeda7a5548c7dc21d1138aa63315f97251b2e7ce01994ec8a33,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760038502584890179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d
35bfd7-f17d-41f2-aef8-835aff67b98d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3b5ea1d8f851c8720b8ef80da01af905fd8fa1d8e53044c80b1cc668a3f88f0,PodSandboxId:142b92395cd1a579b3694e401bf28d894b9cec007b08c518534cbd4093776a47,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760038499370012794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-146992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4289b040a6a87258cdffb981aad63e4a,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af8856fd869633c8fc0035dcee9147b47e66edb57e291ba8ce41a190cb19c709,PodSandboxId:88a4c08842c96998ff5afc8b52dafaca5c6cd69a97328a86de3ce08f44ab79b4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760038499389721627,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-146992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 047b1ba09756428b9fd39112c358fd83,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29dea532b76bf67743e0f4c6ae97d558c181350e7af05d0a4081ba70ae58741e,PodSandboxId:8b081d5c79a7e1d24fba2bc12e9b5f2a8969fe3ae6f01f5870e18a9dcac0f4a1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760038499348943817,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-146992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c579e01b4aef970745a1639d35965fb0,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed48ec15f7307ff8ce1d66b6866072c5b8ce25a65784ea8fae2a60327853035f,PodSandboxId:f51f2547ab43cc1da679a167778a7e017e0e5375c7ecf9b738597b8f2c6f3486,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760038499328129382,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-146992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 080b3e0d309f0cfdbf16969304a70b37,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22618711-91b8-422e-8a44-c3d8220e9437 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:35:18 test-preload-146992 crio[832]: time="2025-10-09 19:35:18.911952907Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4bc173f1-771d-47af-b794-c38cc28ef52b name=/runtime.v1.RuntimeService/Version
	Oct 09 19:35:18 test-preload-146992 crio[832]: time="2025-10-09 19:35:18.912021363Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4bc173f1-771d-47af-b794-c38cc28ef52b name=/runtime.v1.RuntimeService/Version
	Oct 09 19:35:18 test-preload-146992 crio[832]: time="2025-10-09 19:35:18.913553689Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e987cd0a-98e0-47c4-ac44-f0db3afeff29 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:35:18 test-preload-146992 crio[832]: time="2025-10-09 19:35:18.913958402Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760038518913938196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e987cd0a-98e0-47c4-ac44-f0db3afeff29 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:35:18 test-preload-146992 crio[832]: time="2025-10-09 19:35:18.914730255Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6909cff-0285-476a-8505-70e640d64b09 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:35:18 test-preload-146992 crio[832]: time="2025-10-09 19:35:18.915027765Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6909cff-0285-476a-8505-70e640d64b09 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:35:18 test-preload-146992 crio[832]: time="2025-10-09 19:35:18.915571284Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f583ea3ba356868b47d4b1554d73f42aebbc450b363ff1e95635659fb6360270,PodSandboxId:eeeb2f4ee611e30b9696a4309623d9dd3a4231fe0c20fb6ec70fad5b80ff7333,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760038510232600269,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-pvhjc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eadd752d-d1f8-4c96-bb3c-558d4689b824,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad2478a9f5ab2e3aa9e159d1b1af3170714a491ad6cc89baa69d8c8dd9bb49da,PodSandboxId:c6fc330294967172e78f1840a051d5e502840fa59892e1ccb3c7b57b71c1d7db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760038502607746245,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h29th,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fb40f601-5e8e-4e2e-a121-c8e74138f123,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82b0fdd33d159e08e31e07b949b82cdf1f63311e452167e98357cd424cdb3dcb,PodSandboxId:3758f4cdb60bcaeda7a5548c7dc21d1138aa63315f97251b2e7ce01994ec8a33,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760038502584890179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d
35bfd7-f17d-41f2-aef8-835aff67b98d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3b5ea1d8f851c8720b8ef80da01af905fd8fa1d8e53044c80b1cc668a3f88f0,PodSandboxId:142b92395cd1a579b3694e401bf28d894b9cec007b08c518534cbd4093776a47,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760038499370012794,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-146992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4289b040a6a87258cdffb981aad63e4a,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af8856fd869633c8fc0035dcee9147b47e66edb57e291ba8ce41a190cb19c709,PodSandboxId:88a4c08842c96998ff5afc8b52dafaca5c6cd69a97328a86de3ce08f44ab79b4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760038499389721627,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-146992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 047b1ba09756428b9fd39112c358fd83,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29dea532b76bf67743e0f4c6ae97d558c181350e7af05d0a4081ba70ae58741e,PodSandboxId:8b081d5c79a7e1d24fba2bc12e9b5f2a8969fe3ae6f01f5870e18a9dcac0f4a1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760038499348943817,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-146992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c579e01b4aef970745a1639d35965fb0,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed48ec15f7307ff8ce1d66b6866072c5b8ce25a65784ea8fae2a60327853035f,PodSandboxId:f51f2547ab43cc1da679a167778a7e017e0e5375c7ecf9b738597b8f2c6f3486,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760038499328129382,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-146992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 080b3e0d309f0cfdbf16969304a70b37,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b6909cff-0285-476a-8505-70e640d64b09 name=/runtime.v1.RuntimeService/ListContainers
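	The CRI-O debug entries above are the server side of CRI gRPC calls (Version, ImageFsInfo, ListContainers) that the kubelet and crictl make over CRI-O's unix socket. A minimal client sketch of the same ListContainers call, assuming the default crio.sock path (it matches the cri-socket annotation in the node description further down) and the k8s.io/cri-api bindings:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumed socket path for CRI-O; adjust if the runtime is configured differently.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		// Prints roughly the same information as the "container status" table below.
		for _, c := range resp.Containers {
			fmt.Printf("%s\t%s\t%s\n", c.Id, c.Metadata.Name, c.State)
		}
	}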
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f583ea3ba3568       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   8 seconds ago       Running             coredns                   1                   eeeb2f4ee611e       coredns-668d6bf9bc-pvhjc
	ad2478a9f5ab2       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   16 seconds ago      Running             kube-proxy                1                   c6fc330294967       kube-proxy-h29th
	82b0fdd33d159       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       2                   3758f4cdb60bc       storage-provisioner
	af8856fd86963       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   19 seconds ago      Running             kube-scheduler            1                   88a4c08842c96       kube-scheduler-test-preload-146992
	e3b5ea1d8f851       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   19 seconds ago      Running             etcd                      1                   142b92395cd1a       etcd-test-preload-146992
	29dea532b76bf       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   19 seconds ago      Running             kube-controller-manager   1                   8b081d5c79a7e       kube-controller-manager-test-preload-146992
	ed48ec15f7307       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   19 seconds ago      Running             kube-apiserver            1                   f51f2547ab43c       kube-apiserver-test-preload-146992
	
	
	==> coredns [f583ea3ba356868b47d4b1554d73f42aebbc450b363ff1e95635659fb6360270] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49613 - 55329 "HINFO IN 3020721897771321097.820718655439849768. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.032595703s
	
	
	==> describe nodes <==
	Name:               test-preload-146992
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-146992
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=test-preload-146992
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T19_33_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:33:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-146992
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:35:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:35:12 +0000   Thu, 09 Oct 2025 19:33:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:35:12 +0000   Thu, 09 Oct 2025 19:33:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:35:12 +0000   Thu, 09 Oct 2025 19:33:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 19:35:12 +0000   Thu, 09 Oct 2025 19:35:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    test-preload-146992
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 19d45deacf9e467b934cd392ad5a109f
	  System UUID:                19d45dea-cf9e-467b-934c-d392ad5a109f
	  Boot ID:                    c4beadab-afd4-4ca2-af5f-2a49083f486b
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-pvhjc                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     106s
	  kube-system                 etcd-test-preload-146992                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         111s
	  kube-system                 kube-apiserver-test-preload-146992             250m (12%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-test-preload-146992    200m (10%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-h29th                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-test-preload-146992             100m (5%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 104s                 kube-proxy       
	  Normal   Starting                 16s                  kube-proxy       
	  Normal   Starting                 118s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  117s (x8 over 118s)  kubelet          Node test-preload-146992 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    117s (x8 over 118s)  kubelet          Node test-preload-146992 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     117s (x7 over 118s)  kubelet          Node test-preload-146992 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  117s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 112s                 kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    111s                 kubelet          Node test-preload-146992 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  111s                 kubelet          Node test-preload-146992 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     111s                 kubelet          Node test-preload-146992 status is now: NodeHasSufficientPID
	  Normal   NodeReady                111s                 kubelet          Node test-preload-146992 status is now: NodeReady
	  Normal   NodeAllocatableEnforced  111s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           107s                 node-controller  Node test-preload-146992 event: Registered Node test-preload-146992 in Controller
	  Normal   Starting                 22s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node test-preload-146992 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node test-preload-146992 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node test-preload-146992 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 17s                  kubelet          Node test-preload-146992 has been rebooted, boot id: c4beadab-afd4-4ca2-af5f-2a49083f486b
	  Normal   RegisteredNode           14s                  node-controller  Node test-preload-146992 event: Registered Node test-preload-146992 in Controller
	
	
	==> dmesg <==
	[Oct 9 19:34] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000032] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001847] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.006941] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.974321] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085549] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.109157] kauditd_printk_skb: 102 callbacks suppressed
	[Oct 9 19:35] kauditd_printk_skb: 177 callbacks suppressed
	[  +0.000060] kauditd_printk_skb: 128 callbacks suppressed
	[  +5.944827] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [e3b5ea1d8f851c8720b8ef80da01af905fd8fa1d8e53044c80b1cc668a3f88f0] <==
	{"level":"info","ts":"2025-10-09T19:35:00.038148Z","caller":"etcdserver/server.go:757","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"a09c9983ac28f1fd","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-10-09T19:35:00.042092Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2025-10-09T19:35:00.042148Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2025-10-09T19:35:00.037514Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-09T19:35:00.042650Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"a09c9983ac28f1fd","initial-advertise-peer-urls":["https://192.168.39.217:2380"],"listen-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.217:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-09T19:35:00.042703Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-09T19:35:00.042901Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-09T19:35:00.042974Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-09T19:35:00.043004Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-09T19:35:00.152547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-09T19:35:00.152585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-09T19:35:00.152613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd received MsgPreVoteResp from a09c9983ac28f1fd at term 2"}
	{"level":"info","ts":"2025-10-09T19:35:00.152802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became candidate at term 3"}
	{"level":"info","ts":"2025-10-09T19:35:00.152829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd received MsgVoteResp from a09c9983ac28f1fd at term 3"}
	{"level":"info","ts":"2025-10-09T19:35:00.152849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd became leader at term 3"}
	{"level":"info","ts":"2025-10-09T19:35:00.152867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a09c9983ac28f1fd elected leader a09c9983ac28f1fd at term 3"}
	{"level":"info","ts":"2025-10-09T19:35:00.156159Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"a09c9983ac28f1fd","local-member-attributes":"{Name:test-preload-146992 ClientURLs:[https://192.168.39.217:2379]}","request-path":"/0/members/a09c9983ac28f1fd/attributes","cluster-id":"8f39477865362797","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-09T19:35:00.156285Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-09T19:35:00.156461Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-09T19:35:00.156840Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-09T19:35:00.156876Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-09T19:35:00.157403Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-09T19:35:00.157985Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-09T19:35:00.159946Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-09T19:35:00.158055Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.217:2379"}
	
	
	==> kernel <==
	 19:35:19 up 0 min,  0 users,  load average: 0.61, 0.17, 0.06
	Linux test-preload-146992 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [ed48ec15f7307ff8ce1d66b6866072c5b8ce25a65784ea8fae2a60327853035f] <==
	I1009 19:35:02.061229       1 autoregister_controller.go:144] Starting autoregister controller
	I1009 19:35:02.061234       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 19:35:02.102621       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1009 19:35:02.125539       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1009 19:35:02.125636       1 policy_source.go:240] refreshing policies
	I1009 19:35:02.152045       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1009 19:35:02.152404       1 shared_informer.go:320] Caches are synced for configmaps
	I1009 19:35:02.152881       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1009 19:35:02.155274       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1009 19:35:02.155977       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1009 19:35:02.156081       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1009 19:35:02.156413       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 19:35:02.163691       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1009 19:35:02.164730       1 cache.go:39] Caches are synced for autoregister controller
	I1009 19:35:02.179251       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 19:35:02.179827       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E1009 19:35:02.217289       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1009 19:35:02.958095       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 19:35:03.930916       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1009 19:35:03.976596       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1009 19:35:04.026439       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 19:35:04.039180       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 19:35:05.342882       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 19:35:05.394703       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1009 19:35:05.715017       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [29dea532b76bf67743e0f4c6ae97d558c181350e7af05d0a4081ba70ae58741e] <==
	I1009 19:35:05.311015       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1009 19:35:05.311199       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-146992"
	I1009 19:35:05.311287       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1009 19:35:05.315418       1 shared_informer.go:320] Caches are synced for garbage collector
	I1009 19:35:05.315455       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 19:35:05.315466       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 19:35:05.316016       1 shared_informer.go:320] Caches are synced for ephemeral
	I1009 19:35:05.326582       1 shared_informer.go:320] Caches are synced for TTL
	I1009 19:35:05.326671       1 shared_informer.go:320] Caches are synced for endpoint
	I1009 19:35:05.327206       1 shared_informer.go:320] Caches are synced for node
	I1009 19:35:05.328064       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1009 19:35:05.328097       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1009 19:35:05.328103       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1009 19:35:05.328111       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1009 19:35:05.328227       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-146992"
	I1009 19:35:05.343390       1 shared_informer.go:320] Caches are synced for garbage collector
	I1009 19:35:05.389806       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-146992"
	I1009 19:35:05.407548       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="162.020757ms"
	I1009 19:35:05.408080       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="118.597µs"
	I1009 19:35:10.345057       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="71.373µs"
	I1009 19:35:11.369186       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="17.523761ms"
	I1009 19:35:11.369286       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="72.649µs"
	I1009 19:35:12.544401       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-146992"
	I1009 19:35:12.565559       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-146992"
	I1009 19:35:15.313710       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ad2478a9f5ab2e3aa9e159d1b1af3170714a491ad6cc89baa69d8c8dd9bb49da] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1009 19:35:02.807558       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1009 19:35:02.819945       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.217"]
	E1009 19:35:02.820097       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:35:02.861103       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1009 19:35:02.861204       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 19:35:02.861240       1 server_linux.go:170] "Using iptables Proxier"
	I1009 19:35:02.864766       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:35:02.865540       1 server.go:497] "Version info" version="v1.32.0"
	I1009 19:35:02.865594       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:35:02.867812       1 config.go:199] "Starting service config controller"
	I1009 19:35:02.867878       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1009 19:35:02.867936       1 config.go:105] "Starting endpoint slice config controller"
	I1009 19:35:02.867952       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1009 19:35:02.868607       1 config.go:329] "Starting node config controller"
	I1009 19:35:02.868653       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1009 19:35:02.968410       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1009 19:35:02.968458       1 shared_informer.go:320] Caches are synced for service config
	I1009 19:35:02.969648       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [af8856fd869633c8fc0035dcee9147b47e66edb57e291ba8ce41a190cb19c709] <==
	I1009 19:35:00.732692       1 serving.go:386] Generated self-signed cert in-memory
	W1009 19:35:02.019282       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1009 19:35:02.019361       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1009 19:35:02.019372       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1009 19:35:02.019411       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1009 19:35:02.093805       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1009 19:35:02.093853       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:35:02.095955       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:35:02.095989       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1009 19:35:02.095991       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1009 19:35:02.096113       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 19:35:02.196687       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 09 19:35:02 test-preload-146992 kubelet[1152]: E1009 19:35:02.171997    1152 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eadd752d-d1f8-4c96-bb3c-558d4689b824-config-volume podName:eadd752d-d1f8-4c96-bb3c-558d4689b824 nodeName:}" failed. No retries permitted until 2025-10-09 19:35:02.671968665 +0000 UTC m=+5.641377189 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/eadd752d-d1f8-4c96-bb3c-558d4689b824-config-volume") pod "coredns-668d6bf9bc-pvhjc" (UID: "eadd752d-d1f8-4c96-bb3c-558d4689b824") : object "kube-system"/"coredns" not registered
	Oct 09 19:35:02 test-preload-146992 kubelet[1152]: E1009 19:35:02.230963    1152 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-146992\" already exists" pod="kube-system/kube-apiserver-test-preload-146992"
	Oct 09 19:35:02 test-preload-146992 kubelet[1152]: I1009 19:35:02.232802    1152 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-146992"
	Oct 09 19:35:02 test-preload-146992 kubelet[1152]: I1009 19:35:02.235957    1152 kubelet_node_status.go:125] "Node was previously registered" node="test-preload-146992"
	Oct 09 19:35:02 test-preload-146992 kubelet[1152]: I1009 19:35:02.236067    1152 kubelet_node_status.go:79] "Successfully registered node" node="test-preload-146992"
	Oct 09 19:35:02 test-preload-146992 kubelet[1152]: I1009 19:35:02.236093    1152 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 09 19:35:02 test-preload-146992 kubelet[1152]: I1009 19:35:02.238624    1152 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 09 19:35:02 test-preload-146992 kubelet[1152]: I1009 19:35:02.240288    1152 setters.go:602] "Node became not ready" node="test-preload-146992" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-09T19:35:02Z","lastTransitionTime":"2025-10-09T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Oct 09 19:35:02 test-preload-146992 kubelet[1152]: E1009 19:35:02.255217    1152 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-146992\" already exists" pod="kube-system/kube-controller-manager-test-preload-146992"
	Oct 09 19:35:02 test-preload-146992 kubelet[1152]: I1009 19:35:02.255250    1152 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-146992"
	Oct 09 19:35:02 test-preload-146992 kubelet[1152]: E1009 19:35:02.277975    1152 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-146992\" already exists" pod="kube-system/kube-scheduler-test-preload-146992"
	Oct 09 19:35:02 test-preload-146992 kubelet[1152]: I1009 19:35:02.277997    1152 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-146992"
	Oct 09 19:35:02 test-preload-146992 kubelet[1152]: E1009 19:35:02.288970    1152 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-146992\" already exists" pod="kube-system/etcd-test-preload-146992"
	Oct 09 19:35:02 test-preload-146992 kubelet[1152]: E1009 19:35:02.676503    1152 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 09 19:35:02 test-preload-146992 kubelet[1152]: E1009 19:35:02.677232    1152 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eadd752d-d1f8-4c96-bb3c-558d4689b824-config-volume podName:eadd752d-d1f8-4c96-bb3c-558d4689b824 nodeName:}" failed. No retries permitted until 2025-10-09 19:35:03.677139027 +0000 UTC m=+6.646547539 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/eadd752d-d1f8-4c96-bb3c-558d4689b824-config-volume") pod "coredns-668d6bf9bc-pvhjc" (UID: "eadd752d-d1f8-4c96-bb3c-558d4689b824") : object "kube-system"/"coredns" not registered
	Oct 09 19:35:03 test-preload-146992 kubelet[1152]: E1009 19:35:03.682526    1152 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 09 19:35:03 test-preload-146992 kubelet[1152]: E1009 19:35:03.682618    1152 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eadd752d-d1f8-4c96-bb3c-558d4689b824-config-volume podName:eadd752d-d1f8-4c96-bb3c-558d4689b824 nodeName:}" failed. No retries permitted until 2025-10-09 19:35:05.682603976 +0000 UTC m=+8.652012500 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/eadd752d-d1f8-4c96-bb3c-558d4689b824-config-volume") pod "coredns-668d6bf9bc-pvhjc" (UID: "eadd752d-d1f8-4c96-bb3c-558d4689b824") : object "kube-system"/"coredns" not registered
	Oct 09 19:35:04 test-preload-146992 kubelet[1152]: E1009 19:35:04.181712    1152 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-pvhjc" podUID="eadd752d-d1f8-4c96-bb3c-558d4689b824"
	Oct 09 19:35:05 test-preload-146992 kubelet[1152]: E1009 19:35:05.701792    1152 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 09 19:35:05 test-preload-146992 kubelet[1152]: E1009 19:35:05.701894    1152 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eadd752d-d1f8-4c96-bb3c-558d4689b824-config-volume podName:eadd752d-d1f8-4c96-bb3c-558d4689b824 nodeName:}" failed. No retries permitted until 2025-10-09 19:35:09.701875015 +0000 UTC m=+12.671283529 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/eadd752d-d1f8-4c96-bb3c-558d4689b824-config-volume") pod "coredns-668d6bf9bc-pvhjc" (UID: "eadd752d-d1f8-4c96-bb3c-558d4689b824") : object "kube-system"/"coredns" not registered
	Oct 09 19:35:06 test-preload-146992 kubelet[1152]: E1009 19:35:06.181405    1152 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-pvhjc" podUID="eadd752d-d1f8-4c96-bb3c-558d4689b824"
	Oct 09 19:35:07 test-preload-146992 kubelet[1152]: E1009 19:35:07.234780    1152 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760038507233735406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:35:07 test-preload-146992 kubelet[1152]: E1009 19:35:07.234829    1152 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760038507233735406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:35:17 test-preload-146992 kubelet[1152]: E1009 19:35:17.236191    1152 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760038517235780974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:35:17 test-preload-146992 kubelet[1152]: E1009 19:35:17.236235    1152 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760038517235780974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [82b0fdd33d159e08e31e07b949b82cdf1f63311e452167e98357cd424cdb3dcb] <==
	I1009 19:35:02.698674       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-146992 -n test-preload-146992
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-146992 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-146992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-146992
--- FAIL: TestPreload (164.87s)
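For local triage, a minimal sketch of how this failure might be re-run outside CI, assuming a checkout of the minikube repository with the kvm2 driver and CRI-O available; the -tags=integration build tag and the --minikube-start-args flag follow minikube's integration-test conventions and are assumptions here, not taken from this report:

	# hypothetical local re-run of only TestPreload with the same driver/runtime as this job
	go test -v -timeout 30m -tags=integration ./test/integration -run TestPreload \
	  -args --minikube-start-args="--driver=kvm2 --container-runtime=crio"
	# remove any leftover test profiles afterwards, as the harness does above
	out/minikube-linux-amd64 delete --all
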

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (81.46s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-612343 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1009 19:42:59.344981  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-612343 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m16.339360146s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-612343] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-136449/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-136449/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-612343" primary control-plane node in "pause-612343" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-612343" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:42:53.124837  180627 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:42:53.125140  180627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:42:53.125152  180627 out.go:374] Setting ErrFile to fd 2...
	I1009 19:42:53.125159  180627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:42:53.125470  180627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-136449/.minikube/bin
	I1009 19:42:53.126093  180627 out.go:368] Setting JSON to false
	I1009 19:42:53.127368  180627 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8713,"bootTime":1760030260,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:42:53.127501  180627 start.go:143] virtualization: kvm guest
	I1009 19:42:53.129473  180627 out.go:179] * [pause-612343] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:42:53.130703  180627 notify.go:221] Checking for updates...
	I1009 19:42:53.130722  180627 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:42:53.131837  180627 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:42:53.133050  180627 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-136449/kubeconfig
	I1009 19:42:53.134019  180627 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-136449/.minikube
	I1009 19:42:53.135015  180627 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:42:53.136045  180627 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:42:53.137429  180627 config.go:182] Loaded profile config "pause-612343": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:42:53.137865  180627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:42:53.137916  180627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:42:53.153339  180627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43439
	I1009 19:42:53.154174  180627 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:42:53.154777  180627 main.go:141] libmachine: Using API Version  1
	I1009 19:42:53.154801  180627 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:42:53.155451  180627 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:42:53.155652  180627 main.go:141] libmachine: (pause-612343) Calling .DriverName
	I1009 19:42:53.156035  180627 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:42:53.156487  180627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:42:53.156552  180627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:42:53.171775  180627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33915
	I1009 19:42:53.172234  180627 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:42:53.172715  180627 main.go:141] libmachine: Using API Version  1
	I1009 19:42:53.172741  180627 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:42:53.173101  180627 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:42:53.173280  180627 main.go:141] libmachine: (pause-612343) Calling .DriverName
	I1009 19:42:53.850692  180627 out.go:179] * Using the kvm2 driver based on existing profile
	I1009 19:42:53.851735  180627 start.go:309] selected driver: kvm2
	I1009 19:42:53.851757  180627 start.go:930] validating driver "kvm2" against &{Name:pause-612343 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-612343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.79 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:42:53.851961  180627 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:42:53.852427  180627 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:42:53.852506  180627 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-136449/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 19:42:53.866833  180627 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 19:42:53.866873  180627 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-136449/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 19:42:53.881253  180627 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 19:42:53.882472  180627 cni.go:84] Creating CNI manager for ""
	I1009 19:42:53.882544  180627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 19:42:53.882652  180627 start.go:353] cluster config:
	{Name:pause-612343 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-612343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.79 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:42:53.882860  180627 iso.go:125] acquiring lock: {Name:mk98a4af23a55ce5e8a323d2964def6dd3fc61ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:42:53.884784  180627 out.go:179] * Starting "pause-612343" primary control-plane node in "pause-612343" cluster
	I1009 19:42:53.885806  180627 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:42:53.885844  180627 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-136449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:42:53.885855  180627 cache.go:58] Caching tarball of preloaded images
	I1009 19:42:53.885953  180627 preload.go:233] Found /home/jenkins/minikube-integration/21683-136449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:42:53.885967  180627 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:42:53.886085  180627 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/pause-612343/config.json ...
	I1009 19:42:53.886358  180627 start.go:361] acquireMachinesLock for pause-612343: {Name:mkb52a311831bedb463a7965f6666d89b7fa391a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 19:43:11.773013  180627 start.go:365] duration metric: took 17.886593638s to acquireMachinesLock for "pause-612343"
	I1009 19:43:11.773096  180627 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:43:11.773120  180627 fix.go:55] fixHost starting: 
	I1009 19:43:11.773509  180627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:43:11.773575  180627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:43:11.792232  180627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42679
	I1009 19:43:11.792852  180627 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:43:11.793346  180627 main.go:141] libmachine: Using API Version  1
	I1009 19:43:11.793379  180627 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:43:11.793792  180627 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:43:11.794017  180627 main.go:141] libmachine: (pause-612343) Calling .DriverName
	I1009 19:43:11.794178  180627 main.go:141] libmachine: (pause-612343) Calling .GetState
	I1009 19:43:11.796055  180627 fix.go:113] recreateIfNeeded on pause-612343: state=Running err=<nil>
	W1009 19:43:11.796083  180627 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:43:11.798241  180627 out.go:252] * Updating the running kvm2 "pause-612343" VM ...
	I1009 19:43:11.798281  180627 machine.go:93] provisionDockerMachine start ...
	I1009 19:43:11.798301  180627 main.go:141] libmachine: (pause-612343) Calling .DriverName
	I1009 19:43:11.798527  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHHostname
	I1009 19:43:11.801801  180627 main.go:141] libmachine: (pause-612343) DBG | domain pause-612343 has defined MAC address 52:54:00:78:40:b3 in network mk-pause-612343
	I1009 19:43:11.802365  180627 main.go:141] libmachine: (pause-612343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:40:b3", ip: ""} in network mk-pause-612343: {Iface:virbr4 ExpiryTime:2025-10-09 20:41:47 +0000 UTC Type:0 Mac:52:54:00:78:40:b3 Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:pause-612343 Clientid:01:52:54:00:78:40:b3}
	I1009 19:43:11.802393  180627 main.go:141] libmachine: (pause-612343) DBG | domain pause-612343 has defined IP address 192.168.72.79 and MAC address 52:54:00:78:40:b3 in network mk-pause-612343
	I1009 19:43:11.802635  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHPort
	I1009 19:43:11.802825  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHKeyPath
	I1009 19:43:11.803037  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHKeyPath
	I1009 19:43:11.803286  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHUsername
	I1009 19:43:11.803479  180627 main.go:141] libmachine: Using SSH client type: native
	I1009 19:43:11.803776  180627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.79 22 <nil> <nil>}
	I1009 19:43:11.803792  180627 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:43:11.925152  180627 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-612343
	
	I1009 19:43:11.925185  180627 main.go:141] libmachine: (pause-612343) Calling .GetMachineName
	I1009 19:43:11.925462  180627 buildroot.go:166] provisioning hostname "pause-612343"
	I1009 19:43:11.925497  180627 main.go:141] libmachine: (pause-612343) Calling .GetMachineName
	I1009 19:43:11.925747  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHHostname
	I1009 19:43:11.929734  180627 main.go:141] libmachine: (pause-612343) DBG | domain pause-612343 has defined MAC address 52:54:00:78:40:b3 in network mk-pause-612343
	I1009 19:43:11.930227  180627 main.go:141] libmachine: (pause-612343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:40:b3", ip: ""} in network mk-pause-612343: {Iface:virbr4 ExpiryTime:2025-10-09 20:41:47 +0000 UTC Type:0 Mac:52:54:00:78:40:b3 Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:pause-612343 Clientid:01:52:54:00:78:40:b3}
	I1009 19:43:11.930257  180627 main.go:141] libmachine: (pause-612343) DBG | domain pause-612343 has defined IP address 192.168.72.79 and MAC address 52:54:00:78:40:b3 in network mk-pause-612343
	I1009 19:43:11.930467  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHPort
	I1009 19:43:11.930686  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHKeyPath
	I1009 19:43:11.930899  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHKeyPath
	I1009 19:43:11.931060  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHUsername
	I1009 19:43:11.931258  180627 main.go:141] libmachine: Using SSH client type: native
	I1009 19:43:11.931458  180627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.79 22 <nil> <nil>}
	I1009 19:43:11.931470  180627 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-612343 && echo "pause-612343" | sudo tee /etc/hostname
	I1009 19:43:12.074358  180627 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-612343
	
	I1009 19:43:12.074394  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHHostname
	I1009 19:43:12.078328  180627 main.go:141] libmachine: (pause-612343) DBG | domain pause-612343 has defined MAC address 52:54:00:78:40:b3 in network mk-pause-612343
	I1009 19:43:12.078832  180627 main.go:141] libmachine: (pause-612343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:40:b3", ip: ""} in network mk-pause-612343: {Iface:virbr4 ExpiryTime:2025-10-09 20:41:47 +0000 UTC Type:0 Mac:52:54:00:78:40:b3 Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:pause-612343 Clientid:01:52:54:00:78:40:b3}
	I1009 19:43:12.078882  180627 main.go:141] libmachine: (pause-612343) DBG | domain pause-612343 has defined IP address 192.168.72.79 and MAC address 52:54:00:78:40:b3 in network mk-pause-612343
	I1009 19:43:12.079062  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHPort
	I1009 19:43:12.079301  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHKeyPath
	I1009 19:43:12.079553  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHKeyPath
	I1009 19:43:12.079763  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHUsername
	I1009 19:43:12.079973  180627 main.go:141] libmachine: Using SSH client type: native
	I1009 19:43:12.080214  180627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.79 22 <nil> <nil>}
	I1009 19:43:12.080235  180627 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-612343' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-612343/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-612343' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:43:12.200403  180627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:43:12.200433  180627 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21683-136449/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-136449/.minikube}
	I1009 19:43:12.200468  180627 buildroot.go:174] setting up certificates
	I1009 19:43:12.200480  180627 provision.go:84] configureAuth start
	I1009 19:43:12.200489  180627 main.go:141] libmachine: (pause-612343) Calling .GetMachineName
	I1009 19:43:12.200808  180627 main.go:141] libmachine: (pause-612343) Calling .GetIP
	I1009 19:43:12.204399  180627 main.go:141] libmachine: (pause-612343) DBG | domain pause-612343 has defined MAC address 52:54:00:78:40:b3 in network mk-pause-612343
	I1009 19:43:12.204875  180627 main.go:141] libmachine: (pause-612343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:40:b3", ip: ""} in network mk-pause-612343: {Iface:virbr4 ExpiryTime:2025-10-09 20:41:47 +0000 UTC Type:0 Mac:52:54:00:78:40:b3 Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:pause-612343 Clientid:01:52:54:00:78:40:b3}
	I1009 19:43:12.204917  180627 main.go:141] libmachine: (pause-612343) DBG | domain pause-612343 has defined IP address 192.168.72.79 and MAC address 52:54:00:78:40:b3 in network mk-pause-612343
	I1009 19:43:12.205102  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHHostname
	I1009 19:43:12.207981  180627 main.go:141] libmachine: (pause-612343) DBG | domain pause-612343 has defined MAC address 52:54:00:78:40:b3 in network mk-pause-612343
	I1009 19:43:12.208424  180627 main.go:141] libmachine: (pause-612343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:40:b3", ip: ""} in network mk-pause-612343: {Iface:virbr4 ExpiryTime:2025-10-09 20:41:47 +0000 UTC Type:0 Mac:52:54:00:78:40:b3 Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:pause-612343 Clientid:01:52:54:00:78:40:b3}
	I1009 19:43:12.208454  180627 main.go:141] libmachine: (pause-612343) DBG | domain pause-612343 has defined IP address 192.168.72.79 and MAC address 52:54:00:78:40:b3 in network mk-pause-612343
	I1009 19:43:12.208565  180627 provision.go:143] copyHostCerts
	I1009 19:43:12.208622  180627 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-136449/.minikube/cert.pem, removing ...
	I1009 19:43:12.208639  180627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-136449/.minikube/cert.pem
	I1009 19:43:12.208700  180627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-136449/.minikube/cert.pem (1123 bytes)
	I1009 19:43:12.208814  180627 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-136449/.minikube/key.pem, removing ...
	I1009 19:43:12.208822  180627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-136449/.minikube/key.pem
	I1009 19:43:12.208846  180627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-136449/.minikube/key.pem (1675 bytes)
	I1009 19:43:12.208984  180627 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-136449/.minikube/ca.pem, removing ...
	I1009 19:43:12.208993  180627 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-136449/.minikube/ca.pem
	I1009 19:43:12.209021  180627 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-136449/.minikube/ca.pem (1082 bytes)
	I1009 19:43:12.209089  180627 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-136449/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca-key.pem org=jenkins.pause-612343 san=[127.0.0.1 192.168.72.79 localhost minikube pause-612343]
	I1009 19:43:12.610800  180627 provision.go:177] copyRemoteCerts
	I1009 19:43:12.610877  180627 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:43:12.610908  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHHostname
	I1009 19:43:12.614446  180627 main.go:141] libmachine: (pause-612343) DBG | domain pause-612343 has defined MAC address 52:54:00:78:40:b3 in network mk-pause-612343
	I1009 19:43:12.614934  180627 main.go:141] libmachine: (pause-612343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:40:b3", ip: ""} in network mk-pause-612343: {Iface:virbr4 ExpiryTime:2025-10-09 20:41:47 +0000 UTC Type:0 Mac:52:54:00:78:40:b3 Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:pause-612343 Clientid:01:52:54:00:78:40:b3}
	I1009 19:43:12.614967  180627 main.go:141] libmachine: (pause-612343) DBG | domain pause-612343 has defined IP address 192.168.72.79 and MAC address 52:54:00:78:40:b3 in network mk-pause-612343
	I1009 19:43:12.615187  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHPort
	I1009 19:43:12.615426  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHKeyPath
	I1009 19:43:12.615620  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHUsername
	I1009 19:43:12.615768  180627 sshutil.go:53] new ssh client: &{IP:192.168.72.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/pause-612343/id_rsa Username:docker}
	I1009 19:43:12.710373  180627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:43:12.753388  180627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1009 19:43:12.792697  180627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
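The server certificate generated a few lines up carries the SANs listed in the provision.go:117 line (127.0.0.1, 192.168.72.79, localhost, minikube, pause-612343) and is copied to /etc/docker/server.pem. A hedged way to confirm those SANs on the guest, using the same openssl binary this log relies on later:

	sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'
	# expect DNS/IP entries matching the san=[...] list above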
	I1009 19:43:12.833180  180627 provision.go:87] duration metric: took 632.688239ms to configureAuth
	I1009 19:43:12.833209  180627 buildroot.go:189] setting minikube options for container-runtime
	I1009 19:43:12.833445  180627 config.go:182] Loaded profile config "pause-612343": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:43:12.833541  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHHostname
	I1009 19:43:12.836978  180627 main.go:141] libmachine: (pause-612343) DBG | domain pause-612343 has defined MAC address 52:54:00:78:40:b3 in network mk-pause-612343
	I1009 19:43:12.837424  180627 main.go:141] libmachine: (pause-612343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:40:b3", ip: ""} in network mk-pause-612343: {Iface:virbr4 ExpiryTime:2025-10-09 20:41:47 +0000 UTC Type:0 Mac:52:54:00:78:40:b3 Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:pause-612343 Clientid:01:52:54:00:78:40:b3}
	I1009 19:43:12.837445  180627 main.go:141] libmachine: (pause-612343) DBG | domain pause-612343 has defined IP address 192.168.72.79 and MAC address 52:54:00:78:40:b3 in network mk-pause-612343
	I1009 19:43:12.837733  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHPort
	I1009 19:43:12.837958  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHKeyPath
	I1009 19:43:12.838167  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHKeyPath
	I1009 19:43:12.838293  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHUsername
	I1009 19:43:12.838447  180627 main.go:141] libmachine: Using SSH client type: native
	I1009 19:43:12.838706  180627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.79 22 <nil> <nil>}
	I1009 19:43:12.838722  180627 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:43:18.423684  180627 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:43:18.423711  180627 machine.go:96] duration metric: took 6.625420742s to provisionDockerMachine
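provisionDockerMachine ends by writing the insecure-registry flag into /etc/sysconfig/crio.minikube and restarting crio; the gap between 19:43:12.838 and 19:43:18.423 is mostly that restart. A hedged spot-check that the option landed and the service came back:

	cat /etc/sysconfig/crio.minikube    # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	sudo systemctl is-active crio       # expect: active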
	I1009 19:43:18.423725  180627 start.go:294] postStartSetup for "pause-612343" (driver="kvm2")
	I1009 19:43:18.423737  180627 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:43:18.423759  180627 main.go:141] libmachine: (pause-612343) Calling .DriverName
	I1009 19:43:18.424168  180627 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:43:18.424207  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHHostname
	I1009 19:43:18.427619  180627 main.go:141] libmachine: (pause-612343) DBG | domain pause-612343 has defined MAC address 52:54:00:78:40:b3 in network mk-pause-612343
	I1009 19:43:18.428089  180627 main.go:141] libmachine: (pause-612343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:40:b3", ip: ""} in network mk-pause-612343: {Iface:virbr4 ExpiryTime:2025-10-09 20:41:47 +0000 UTC Type:0 Mac:52:54:00:78:40:b3 Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:pause-612343 Clientid:01:52:54:00:78:40:b3}
	I1009 19:43:18.428120  180627 main.go:141] libmachine: (pause-612343) DBG | domain pause-612343 has defined IP address 192.168.72.79 and MAC address 52:54:00:78:40:b3 in network mk-pause-612343
	I1009 19:43:18.428367  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHPort
	I1009 19:43:18.428600  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHKeyPath
	I1009 19:43:18.428769  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHUsername
	I1009 19:43:18.428902  180627 sshutil.go:53] new ssh client: &{IP:192.168.72.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/pause-612343/id_rsa Username:docker}
	I1009 19:43:18.516853  180627 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:43:18.522451  180627 info.go:137] Remote host: Buildroot 2025.02
	I1009 19:43:18.522496  180627 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-136449/.minikube/addons for local assets ...
	I1009 19:43:18.522600  180627 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-136449/.minikube/files for local assets ...
	I1009 19:43:18.522710  180627 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-136449/.minikube/files/etc/ssl/certs/1403582.pem -> 1403582.pem in /etc/ssl/certs
	I1009 19:43:18.522875  180627 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:43:18.540133  180627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/files/etc/ssl/certs/1403582.pem --> /etc/ssl/certs/1403582.pem (1708 bytes)
	I1009 19:43:18.576096  180627 start.go:297] duration metric: took 152.354752ms for postStartSetup
	I1009 19:43:18.576137  180627 fix.go:57] duration metric: took 6.803020946s for fixHost
	I1009 19:43:18.576156  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHHostname
	I1009 19:43:18.579823  180627 main.go:141] libmachine: (pause-612343) DBG | domain pause-612343 has defined MAC address 52:54:00:78:40:b3 in network mk-pause-612343
	I1009 19:43:18.580261  180627 main.go:141] libmachine: (pause-612343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:40:b3", ip: ""} in network mk-pause-612343: {Iface:virbr4 ExpiryTime:2025-10-09 20:41:47 +0000 UTC Type:0 Mac:52:54:00:78:40:b3 Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:pause-612343 Clientid:01:52:54:00:78:40:b3}
	I1009 19:43:18.580289  180627 main.go:141] libmachine: (pause-612343) DBG | domain pause-612343 has defined IP address 192.168.72.79 and MAC address 52:54:00:78:40:b3 in network mk-pause-612343
	I1009 19:43:18.580475  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHPort
	I1009 19:43:18.580696  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHKeyPath
	I1009 19:43:18.580930  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHKeyPath
	I1009 19:43:18.581112  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHUsername
	I1009 19:43:18.581289  180627 main.go:141] libmachine: Using SSH client type: native
	I1009 19:43:18.581498  180627 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.72.79 22 <nil> <nil>}
	I1009 19:43:18.581508  180627 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 19:43:18.698031  180627 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760038998.693419841
	
	I1009 19:43:18.698052  180627 fix.go:217] guest clock: 1760038998.693419841
	I1009 19:43:18.698061  180627 fix.go:230] Guest: 2025-10-09 19:43:18.693419841 +0000 UTC Remote: 2025-10-09 19:43:18.576141033 +0000 UTC m=+25.495188118 (delta=117.278808ms)
	I1009 19:43:18.698092  180627 fix.go:201] guest clock delta is within tolerance: 117.278808ms
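The delta reported here is simply the guest's `date +%s.%N` reading minus the host-side timestamp captured when fixHost returned. Reproducing the arithmetic from the two timestamps above (assuming bc is available on the host):

	echo '1760038998.693419841 - 1760038998.576141033' | bc    # prints .117278808 s, i.e. the 117.278808ms delta logged above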
	I1009 19:43:18.698100  180627 start.go:84] releasing machines lock for "pause-612343", held for 6.925059543s
	I1009 19:43:18.698126  180627 main.go:141] libmachine: (pause-612343) Calling .DriverName
	I1009 19:43:18.698418  180627 main.go:141] libmachine: (pause-612343) Calling .GetIP
	I1009 19:43:18.701859  180627 main.go:141] libmachine: (pause-612343) DBG | domain pause-612343 has defined MAC address 52:54:00:78:40:b3 in network mk-pause-612343
	I1009 19:43:18.702315  180627 main.go:141] libmachine: (pause-612343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:40:b3", ip: ""} in network mk-pause-612343: {Iface:virbr4 ExpiryTime:2025-10-09 20:41:47 +0000 UTC Type:0 Mac:52:54:00:78:40:b3 Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:pause-612343 Clientid:01:52:54:00:78:40:b3}
	I1009 19:43:18.702349  180627 main.go:141] libmachine: (pause-612343) DBG | domain pause-612343 has defined IP address 192.168.72.79 and MAC address 52:54:00:78:40:b3 in network mk-pause-612343
	I1009 19:43:18.702526  180627 main.go:141] libmachine: (pause-612343) Calling .DriverName
	I1009 19:43:18.703105  180627 main.go:141] libmachine: (pause-612343) Calling .DriverName
	I1009 19:43:18.703322  180627 main.go:141] libmachine: (pause-612343) Calling .DriverName
	I1009 19:43:18.703431  180627 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:43:18.703480  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHHostname
	I1009 19:43:18.703603  180627 ssh_runner.go:195] Run: cat /version.json
	I1009 19:43:18.703631  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHHostname
	I1009 19:43:18.707429  180627 main.go:141] libmachine: (pause-612343) DBG | domain pause-612343 has defined MAC address 52:54:00:78:40:b3 in network mk-pause-612343
	I1009 19:43:18.707830  180627 main.go:141] libmachine: (pause-612343) DBG | domain pause-612343 has defined MAC address 52:54:00:78:40:b3 in network mk-pause-612343
	I1009 19:43:18.707869  180627 main.go:141] libmachine: (pause-612343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:40:b3", ip: ""} in network mk-pause-612343: {Iface:virbr4 ExpiryTime:2025-10-09 20:41:47 +0000 UTC Type:0 Mac:52:54:00:78:40:b3 Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:pause-612343 Clientid:01:52:54:00:78:40:b3}
	I1009 19:43:18.707891  180627 main.go:141] libmachine: (pause-612343) DBG | domain pause-612343 has defined IP address 192.168.72.79 and MAC address 52:54:00:78:40:b3 in network mk-pause-612343
	I1009 19:43:18.708076  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHPort
	I1009 19:43:18.708281  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHKeyPath
	I1009 19:43:18.708470  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHUsername
	I1009 19:43:18.708532  180627 main.go:141] libmachine: (pause-612343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:40:b3", ip: ""} in network mk-pause-612343: {Iface:virbr4 ExpiryTime:2025-10-09 20:41:47 +0000 UTC Type:0 Mac:52:54:00:78:40:b3 Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:pause-612343 Clientid:01:52:54:00:78:40:b3}
	I1009 19:43:18.708573  180627 main.go:141] libmachine: (pause-612343) DBG | domain pause-612343 has defined IP address 192.168.72.79 and MAC address 52:54:00:78:40:b3 in network mk-pause-612343
	I1009 19:43:18.708712  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHPort
	I1009 19:43:18.708727  180627 sshutil.go:53] new ssh client: &{IP:192.168.72.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/pause-612343/id_rsa Username:docker}
	I1009 19:43:18.708867  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHKeyPath
	I1009 19:43:18.709009  180627 main.go:141] libmachine: (pause-612343) Calling .GetSSHUsername
	I1009 19:43:18.709177  180627 sshutil.go:53] new ssh client: &{IP:192.168.72.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/pause-612343/id_rsa Username:docker}
	I1009 19:43:18.819829  180627 ssh_runner.go:195] Run: systemctl --version
	I1009 19:43:18.828577  180627 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:43:18.987656  180627 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:43:18.998517  180627 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:43:18.998629  180627 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:43:19.011545  180627 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:43:19.011584  180627 start.go:496] detecting cgroup driver to use...
	I1009 19:43:19.011672  180627 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:43:19.033725  180627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:43:19.054444  180627 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:43:19.054512  180627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:43:19.079515  180627 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:43:19.104313  180627 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:43:19.310257  180627 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:43:19.494096  180627 docker.go:234] disabling docker service ...
	I1009 19:43:19.494292  180627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:43:19.530333  180627 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:43:19.546678  180627 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:43:19.738031  180627 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:43:19.941468  180627 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:43:19.961787  180627 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:43:19.989146  180627 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:43:19.989236  180627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:43:20.004424  180627 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:43:20.004510  180627 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:43:20.021917  180627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:43:20.036092  180627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:43:20.050149  180627 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:43:20.065406  180627 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:43:20.084190  180627 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:43:20.101677  180627 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:43:20.115695  180627 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:43:20.127463  180627 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:43:20.143102  180627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:43:20.341185  180627 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:43:23.477082  180627 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.13586143s)
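The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon_cgroup = "pod", and the unprivileged-port sysctl) before the ~3.1s crio restart. A hedged spot-check of the resulting file:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expect: pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "cgroupfs",
	#         conmon_cgroup = "pod", "net.ipv4.ip_unprivileged_port_start=0"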
	I1009 19:43:23.477119  180627 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:43:23.477177  180627 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:43:23.485401  180627 start.go:564] Will wait 60s for crictl version
	I1009 19:43:23.485476  180627 ssh_runner.go:195] Run: which crictl
	I1009 19:43:23.490238  180627 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 19:43:23.538065  180627 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 19:43:23.538162  180627 ssh_runner.go:195] Run: crio --version
	I1009 19:43:23.580310  180627 ssh_runner.go:195] Run: crio --version
	I1009 19:43:23.624677  180627 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1009 19:43:23.625815  180627 main.go:141] libmachine: (pause-612343) Calling .GetIP
	I1009 19:43:23.629784  180627 main.go:141] libmachine: (pause-612343) DBG | domain pause-612343 has defined MAC address 52:54:00:78:40:b3 in network mk-pause-612343
	I1009 19:43:23.630301  180627 main.go:141] libmachine: (pause-612343) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:40:b3", ip: ""} in network mk-pause-612343: {Iface:virbr4 ExpiryTime:2025-10-09 20:41:47 +0000 UTC Type:0 Mac:52:54:00:78:40:b3 Iaid: IPaddr:192.168.72.79 Prefix:24 Hostname:pause-612343 Clientid:01:52:54:00:78:40:b3}
	I1009 19:43:23.630339  180627 main.go:141] libmachine: (pause-612343) DBG | domain pause-612343 has defined IP address 192.168.72.79 and MAC address 52:54:00:78:40:b3 in network mk-pause-612343
	I1009 19:43:23.630640  180627 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1009 19:43:23.637851  180627 kubeadm.go:883] updating cluster {Name:pause-612343 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1
ClusterName:pause-612343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.79 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidi
a-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:43:23.638045  180627 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:43:23.638105  180627 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:43:23.702070  180627 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:43:23.702102  180627 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:43:23.702210  180627 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:43:23.753739  180627 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:43:23.753772  180627 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:43:23.753782  180627 kubeadm.go:934] updating node { 192.168.72.79 8443 v1.34.1 crio true true} ...
	I1009 19:43:23.753919  180627 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-612343 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.79
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-612343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
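The [Unit]/[Service] snippet above is the kubelet drop-in; a few lines below it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes) before daemon-reload and systemctl start kubelet. A hedged way to confirm the flags took effect on the node:

	sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	pgrep -a kubelet    # command line should include --node-ip=192.168.72.79 and --hostname-override=pause-612343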
	I1009 19:43:23.754025  180627 ssh_runner.go:195] Run: crio config
	I1009 19:43:23.832340  180627 cni.go:84] Creating CNI manager for ""
	I1009 19:43:23.832365  180627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 19:43:23.832390  180627 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:43:23.832421  180627 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.79 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-612343 NodeName:pause-612343 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.79"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.79 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:43:23.832612  180627 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.79
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-612343"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.79"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.79"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:43:23.832686  180627 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:43:23.850161  180627 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:43:23.850275  180627 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:43:23.863700  180627 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1009 19:43:23.889912  180627 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:43:23.914778  180627 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
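The rendered kubeadm config shown above is what lands in /var/tmp/minikube/kubeadm.yaml.new (2212 bytes). A hedged spot-check of the fields that matter for this profile:

	sudo grep -E 'advertiseAddress|controlPlaneEndpoint|podSubnet|serviceSubnet|cgroupDriver' /var/tmp/minikube/kubeadm.yaml.new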
	I1009 19:43:23.940508  180627 ssh_runner.go:195] Run: grep 192.168.72.79	control-plane.minikube.internal$ /etc/hosts
	I1009 19:43:23.946735  180627 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:43:24.135449  180627 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:43:24.155680  180627 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/pause-612343 for IP: 192.168.72.79
	I1009 19:43:24.155704  180627 certs.go:195] generating shared ca certs ...
	I1009 19:43:24.155720  180627 certs.go:227] acquiring lock for ca certs: {Name:mkad58f6533e9a5aa8b52ac28f20029620803fc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:43:24.155915  180627 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-136449/.minikube/ca.key
	I1009 19:43:24.155985  180627 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-136449/.minikube/proxy-client-ca.key
	I1009 19:43:24.156001  180627 certs.go:257] generating profile certs ...
	I1009 19:43:24.156088  180627 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/pause-612343/client.key
	I1009 19:43:24.156137  180627 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/pause-612343/apiserver.key.4712a38e
	I1009 19:43:24.156172  180627 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/pause-612343/proxy-client.key
	I1009 19:43:24.156283  180627 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/140358.pem (1338 bytes)
	W1009 19:43:24.156320  180627 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-136449/.minikube/certs/140358_empty.pem, impossibly tiny 0 bytes
	I1009 19:43:24.156329  180627 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:43:24.156353  180627 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem (1082 bytes)
	I1009 19:43:24.156375  180627 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:43:24.156400  180627 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/key.pem (1675 bytes)
	I1009 19:43:24.156446  180627 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-136449/.minikube/files/etc/ssl/certs/1403582.pem (1708 bytes)
	I1009 19:43:24.157061  180627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:43:24.192282  180627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:43:24.226799  180627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:43:24.383039  180627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:43:24.489970  180627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/pause-612343/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1009 19:43:24.560084  180627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/pause-612343/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:43:24.631209  180627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/pause-612343/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:43:24.710267  180627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/pause-612343/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:43:24.812766  180627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:43:24.898134  180627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/certs/140358.pem --> /usr/share/ca-certificates/140358.pem (1338 bytes)
	I1009 19:43:24.970261  180627 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/files/etc/ssl/certs/1403582.pem --> /usr/share/ca-certificates/1403582.pem (1708 bytes)
	I1009 19:43:25.025449  180627 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:43:25.066457  180627 ssh_runner.go:195] Run: openssl version
	I1009 19:43:25.084027  180627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1403582.pem && ln -fs /usr/share/ca-certificates/1403582.pem /etc/ssl/certs/1403582.pem"
	I1009 19:43:25.109917  180627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1403582.pem
	I1009 19:43:25.121373  180627 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:48 /usr/share/ca-certificates/1403582.pem
	I1009 19:43:25.121443  180627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1403582.pem
	I1009 19:43:25.134855  180627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1403582.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:43:25.163072  180627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:43:25.203815  180627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:43:25.222475  180627 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:43:25.222542  180627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:43:25.248974  180627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:43:25.317676  180627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140358.pem && ln -fs /usr/share/ca-certificates/140358.pem /etc/ssl/certs/140358.pem"
	I1009 19:43:25.344198  180627 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140358.pem
	I1009 19:43:25.356129  180627 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:48 /usr/share/ca-certificates/140358.pem
	I1009 19:43:25.356228  180627 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140358.pem
	I1009 19:43:25.375795  180627 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/140358.pem /etc/ssl/certs/51391683.0"
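Each of the three ln -fs passes above links a CA-style cert into /etc/ssl/certs under its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0), which is the value printed by `openssl x509 -hash`. For the minikube CA, for example (hedged, run inside the guest):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # expect: b5213941
	ls -l /etc/ssl/certs/b5213941.0                                            # expect a symlink to minikubeCA.pem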
	I1009 19:43:25.401271  180627 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:43:25.422839  180627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:43:25.458346  180627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:43:25.484128  180627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:43:25.509073  180627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:43:25.538210  180627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:43:25.572603  180627 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
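The `-checkend 86400` runs above succeed only if the certificate stays valid for at least another 24 hours, presumably so the restart path can decide whether control-plane certs need regenerating. The same check for a single cert (hedged):

	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	# exit status 0 (and "Certificate will not expire") means >=24h of validity remain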
	I1009 19:43:25.598159  180627 kubeadm.go:400] StartCluster: {Name:pause-612343 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Cl
usterName:pause-612343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.79 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-g
pu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:43:25.598329  180627 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:43:25.598429  180627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:43:25.709166  180627 cri.go:89] found id: "60b0cc479ec3852a706fb93935290f72957b4299faa5a1ef242d48875c76b7b7"
	I1009 19:43:25.709198  180627 cri.go:89] found id: "bc15b6320615dd87582440699484b1cd8bfdce38a3e99d4a1ab0286bf7308d29"
	I1009 19:43:25.709205  180627 cri.go:89] found id: "fd52e789b1557c75034fd366118f56b15157e680cd59cbc92559fd05a9511bc9"
	I1009 19:43:25.709216  180627 cri.go:89] found id: "f30930d41ed78ad8cacac4a933b82bacbafa334955df6a526020cdb0bdbd20cf"
	I1009 19:43:25.709220  180627 cri.go:89] found id: "ced85d95dcdefa422d5d30cf109664a2ed57762475b4d0bbc39cc8459b238880"
	I1009 19:43:25.709225  180627 cri.go:89] found id: "0cdb0ca695027f11308177ae424c24141a277c42ea52f240d89744d7e1654a36"
	I1009 19:43:25.709228  180627 cri.go:89] found id: "380d4e383cd45a00c4215091448fc36b6eb0eae4185a26392943e35088161dfa"
	I1009 19:43:25.709232  180627 cri.go:89] found id: "21c92fa1aaaf9679db031d0a9fbe022abb3aa06c43179cfc6ddd4101f92a7722"
	I1009 19:43:25.709236  180627 cri.go:89] found id: "0e578cbea80e379095323efe2999da810f5a56e84c63d13cc8f02ff48c7f86a2"
	I1009 19:43:25.709263  180627 cri.go:89] found id: ""
	I1009 19:43:25.709323  180627 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
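For reference, the container IDs in the cri.go:89 lines near the end of the log above come from crictl filtered on the kube-system namespace label; running the same filter without --quiet shows the pod/container names behind those IDs. A hedged reproduction of the last two commands visible before the output ends:

	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system    # same filter as above, with names visible
	sudo runc list -f json                                               # the low-level runtime view queried next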
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-612343 -n pause-612343
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-612343 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-612343 logs -n 25: (1.665541516s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                               ARGS                                                                               │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-980148 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                              │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo cat /etc/kubernetes/kubelet.conf                                                                                                             │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo cat /var/lib/kubelet/config.yaml                                                                                                             │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo systemctl status docker --all --full --no-pager                                                                                              │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │                     │
	│ ssh     │ -p auto-980148 sudo systemctl cat docker --no-pager                                                                                                              │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo cat /etc/docker/daemon.json                                                                                                                  │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo docker system info                                                                                                                           │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │                     │
	│ ssh     │ -p auto-980148 sudo systemctl status cri-docker --all --full --no-pager                                                                                          │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │                     │
	│ ssh     │ -p auto-980148 sudo systemctl cat cri-docker --no-pager                                                                                                          │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                     │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │                     │
	│ ssh     │ -p auto-980148 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                               │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo cri-dockerd --version                                                                                                                        │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo systemctl status containerd --all --full --no-pager                                                                                          │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │                     │
	│ ssh     │ -p auto-980148 sudo systemctl cat containerd --no-pager                                                                                                          │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo cat /lib/systemd/system/containerd.service                                                                                                   │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo cat /etc/containerd/config.toml                                                                                                              │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo containerd config dump                                                                                                                       │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo systemctl status crio --all --full --no-pager                                                                                                │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo systemctl cat crio --no-pager                                                                                                                │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                      │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo crio config                                                                                                                                  │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ delete  │ -p auto-980148                                                                                                                                                   │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ start   │ -p calico-980148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ calico-980148          │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │                     │
	│ ssh     │ -p kindnet-980148 pgrep -a kubelet                                                                                                                               │ kindnet-980148         │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ start   │ -p cert-expiration-635437 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                              │ cert-expiration-635437 │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:43:58
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:43:58.651095  182549 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:43:58.651390  182549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:43:58.651396  182549 out.go:374] Setting ErrFile to fd 2...
	I1009 19:43:58.651401  182549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:43:58.651739  182549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-136449/.minikube/bin
	I1009 19:43:58.652354  182549 out.go:368] Setting JSON to false
	I1009 19:43:58.653647  182549 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8779,"bootTime":1760030260,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:43:58.653764  182549 start.go:143] virtualization: kvm guest
	I1009 19:43:58.655353  182549 out.go:179] * [cert-expiration-635437] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:43:58.656753  182549 notify.go:221] Checking for updates...
	I1009 19:43:58.656802  182549 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:43:58.657961  182549 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:43:58.659306  182549 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-136449/kubeconfig
	I1009 19:43:58.660473  182549 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-136449/.minikube
	I1009 19:43:58.661708  182549 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:43:58.662854  182549 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:43:58.664792  182549 config.go:182] Loaded profile config "cert-expiration-635437": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:43:58.665364  182549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:43:58.665445  182549 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:43:58.685432  182549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34183
	I1009 19:43:58.686080  182549 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:43:58.686917  182549 main.go:141] libmachine: Using API Version  1
	I1009 19:43:58.686936  182549 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:43:58.687271  182549 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:43:58.687473  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .DriverName
	I1009 19:43:58.687839  182549 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:43:58.688297  182549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:43:58.688341  182549 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:43:58.703755  182549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36449
	I1009 19:43:58.704188  182549 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:43:58.704715  182549 main.go:141] libmachine: Using API Version  1
	I1009 19:43:58.704726  182549 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:43:58.705127  182549 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:43:58.705349  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .DriverName
	I1009 19:43:58.749928  182549 out.go:179] * Using the kvm2 driver based on existing profile
	I1009 19:43:58.750942  182549 start.go:309] selected driver: kvm2
	I1009 19:43:58.750950  182549 start.go:930] validating driver "kvm2" against &{Name:cert-expiration-635437 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.34.1 ClusterName:cert-expiration-635437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:43:58.751045  182549 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:43:58.751875  182549 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:43:58.751984  182549 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-136449/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 19:43:58.769069  182549 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 19:43:58.769102  182549 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-136449/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 19:43:58.785134  182549 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 19:43:58.785526  182549 cni.go:84] Creating CNI manager for ""
	I1009 19:43:58.785587  182549 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 19:43:58.785644  182549 start.go:353] cluster config:
	{Name:cert-expiration-635437 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-635437 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:43:58.785742  182549 iso.go:125] acquiring lock: {Name:mk98a4af23a55ce5e8a323d2964def6dd3fc61ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:43:58.787513  182549 out.go:179] * Starting "cert-expiration-635437" primary control-plane node in "cert-expiration-635437" cluster
	I1009 19:43:58.067284  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:43:58.068355  182334 main.go:141] libmachine: (calico-980148) DBG | no network interface addresses found for domain calico-980148 (source=lease)
	I1009 19:43:58.068442  182334 main.go:141] libmachine: (calico-980148) DBG | trying to list again with source=arp
	I1009 19:43:58.068771  182334 main.go:141] libmachine: (calico-980148) DBG | unable to find current IP address of domain calico-980148 in network mk-calico-980148 (interfaces detected: [])
	I1009 19:43:58.068961  182334 main.go:141] libmachine: (calico-980148) DBG | I1009 19:43:58.068896  182362 retry.go:31] will retry after 2.744610403s: waiting for domain to come up
	I1009 19:44:00.817256  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:00.818184  182334 main.go:141] libmachine: (calico-980148) DBG | no network interface addresses found for domain calico-980148 (source=lease)
	I1009 19:44:00.818213  182334 main.go:141] libmachine: (calico-980148) DBG | trying to list again with source=arp
	I1009 19:44:00.818618  182334 main.go:141] libmachine: (calico-980148) DBG | unable to find current IP address of domain calico-980148 in network mk-calico-980148 (interfaces detected: [])
	I1009 19:44:00.818686  182334 main.go:141] libmachine: (calico-980148) DBG | I1009 19:44:00.818613  182362 retry.go:31] will retry after 3.634930175s: waiting for domain to come up
	W1009 19:43:59.633526  180627 pod_ready.go:104] pod "etcd-pause-612343" is not "Ready", error: <nil>
	W1009 19:44:02.134731  180627 pod_ready.go:104] pod "etcd-pause-612343" is not "Ready", error: <nil>
	I1009 19:43:58.788613  182549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:43:58.788643  182549 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-136449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:43:58.788658  182549 cache.go:58] Caching tarball of preloaded images
	I1009 19:43:58.788734  182549 preload.go:233] Found /home/jenkins/minikube-integration/21683-136449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:43:58.788740  182549 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:43:58.788832  182549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/cert-expiration-635437/config.json ...
	I1009 19:43:58.789109  182549 start.go:361] acquireMachinesLock for cert-expiration-635437: {Name:mkb52a311831bedb463a7965f6666d89b7fa391a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 19:44:06.830845  182549 start.go:365] duration metric: took 8.041687881s to acquireMachinesLock for "cert-expiration-635437"
	I1009 19:44:06.830901  182549 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:44:06.830907  182549 fix.go:55] fixHost starting: 
	I1009 19:44:06.831370  182549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:44:06.831425  182549 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:44:06.850331  182549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43259
	I1009 19:44:06.850870  182549 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:44:06.851362  182549 main.go:141] libmachine: Using API Version  1
	I1009 19:44:06.851382  182549 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:44:06.851815  182549 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:44:06.852068  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .DriverName
	I1009 19:44:06.852296  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetState
	I1009 19:44:06.854726  182549 fix.go:113] recreateIfNeeded on cert-expiration-635437: state=Running err=<nil>
	W1009 19:44:06.854757  182549 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:44:04.456020  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:04.456939  182334 main.go:141] libmachine: (calico-980148) found domain IP: 192.168.50.239
	I1009 19:44:04.456966  182334 main.go:141] libmachine: (calico-980148) reserving static IP address...
	I1009 19:44:04.457010  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has current primary IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:04.457552  182334 main.go:141] libmachine: (calico-980148) DBG | unable to find host DHCP lease matching {name: "calico-980148", mac: "52:54:00:c8:17:32", ip: "192.168.50.239"} in network mk-calico-980148
	I1009 19:44:04.689486  182334 main.go:141] libmachine: (calico-980148) reserved static IP address 192.168.50.239 for domain calico-980148
	I1009 19:44:04.689513  182334 main.go:141] libmachine: (calico-980148) waiting for SSH...
	I1009 19:44:04.689519  182334 main.go:141] libmachine: (calico-980148) DBG | Getting to WaitForSSH function...
	I1009 19:44:04.692759  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:04.693144  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:04.693176  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:04.693350  182334 main.go:141] libmachine: (calico-980148) DBG | Using SSH client type: external
	I1009 19:44:04.693376  182334 main.go:141] libmachine: (calico-980148) DBG | Using SSH private key: /home/jenkins/minikube-integration/21683-136449/.minikube/machines/calico-980148/id_rsa (-rw-------)
	I1009 19:44:04.693424  182334 main.go:141] libmachine: (calico-980148) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.239 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21683-136449/.minikube/machines/calico-980148/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 19:44:04.693436  182334 main.go:141] libmachine: (calico-980148) DBG | About to run SSH command:
	I1009 19:44:04.693459  182334 main.go:141] libmachine: (calico-980148) DBG | exit 0
	I1009 19:44:04.829859  182334 main.go:141] libmachine: (calico-980148) DBG | SSH cmd err, output: <nil>: 
	I1009 19:44:04.830216  182334 main.go:141] libmachine: (calico-980148) domain creation complete
	I1009 19:44:04.830665  182334 main.go:141] libmachine: (calico-980148) Calling .GetConfigRaw
	I1009 19:44:04.831262  182334 main.go:141] libmachine: (calico-980148) Calling .DriverName
	I1009 19:44:04.831505  182334 main.go:141] libmachine: (calico-980148) Calling .DriverName
	I1009 19:44:04.831697  182334 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 19:44:04.831710  182334 main.go:141] libmachine: (calico-980148) Calling .GetState
	I1009 19:44:04.833206  182334 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 19:44:04.833227  182334 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 19:44:04.833234  182334 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 19:44:04.833239  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHHostname
	I1009 19:44:04.836173  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:04.836699  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:04.836722  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:04.836961  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHPort
	I1009 19:44:04.837139  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:04.837316  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:04.837439  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHUsername
	I1009 19:44:04.837601  182334 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:04.837886  182334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.239 22 <nil> <nil>}
	I1009 19:44:04.837900  182334 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 19:44:04.952723  182334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:44:04.952748  182334 main.go:141] libmachine: Detecting the provisioner...
	I1009 19:44:04.952756  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHHostname
	I1009 19:44:04.956172  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:04.956573  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:04.956606  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:04.956828  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHPort
	I1009 19:44:04.957069  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:04.957233  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:04.957400  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHUsername
	I1009 19:44:04.957569  182334 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:04.957862  182334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.239 22 <nil> <nil>}
	I1009 19:44:04.957879  182334 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 19:44:05.075477  182334 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1009 19:44:05.075635  182334 main.go:141] libmachine: found compatible host: buildroot
	I1009 19:44:05.075656  182334 main.go:141] libmachine: Provisioning with buildroot...
	I1009 19:44:05.075669  182334 main.go:141] libmachine: (calico-980148) Calling .GetMachineName
	I1009 19:44:05.076000  182334 buildroot.go:166] provisioning hostname "calico-980148"
	I1009 19:44:05.076033  182334 main.go:141] libmachine: (calico-980148) Calling .GetMachineName
	I1009 19:44:05.076286  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHHostname
	I1009 19:44:05.079635  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:05.080034  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:05.080066  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:05.080334  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHPort
	I1009 19:44:05.080573  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:05.080785  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:05.080984  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHUsername
	I1009 19:44:05.081174  182334 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:05.081403  182334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.239 22 <nil> <nil>}
	I1009 19:44:05.081422  182334 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-980148 && echo "calico-980148" | sudo tee /etc/hostname
	I1009 19:44:05.220334  182334 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-980148
	
	I1009 19:44:05.220370  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHHostname
	I1009 19:44:05.224136  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:05.224646  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:05.224677  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:05.224952  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHPort
	I1009 19:44:05.225196  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:05.225384  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:05.225540  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHUsername
	I1009 19:44:05.225734  182334 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:05.225966  182334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.239 22 <nil> <nil>}
	I1009 19:44:05.225990  182334 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-980148' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-980148/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-980148' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:44:05.350909  182334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:44:05.350937  182334 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21683-136449/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-136449/.minikube}
	I1009 19:44:05.350960  182334 buildroot.go:174] setting up certificates
	I1009 19:44:05.350974  182334 provision.go:84] configureAuth start
	I1009 19:44:05.350986  182334 main.go:141] libmachine: (calico-980148) Calling .GetMachineName
	I1009 19:44:05.351346  182334 main.go:141] libmachine: (calico-980148) Calling .GetIP
	I1009 19:44:05.354773  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:05.355220  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:05.355273  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:05.355483  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHHostname
	I1009 19:44:05.358080  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:05.358431  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:05.358457  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:05.358680  182334 provision.go:143] copyHostCerts
	I1009 19:44:05.358743  182334 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-136449/.minikube/ca.pem, removing ...
	I1009 19:44:05.358761  182334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-136449/.minikube/ca.pem
	I1009 19:44:05.358836  182334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-136449/.minikube/ca.pem (1082 bytes)
	I1009 19:44:05.358945  182334 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-136449/.minikube/cert.pem, removing ...
	I1009 19:44:05.358953  182334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-136449/.minikube/cert.pem
	I1009 19:44:05.358983  182334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-136449/.minikube/cert.pem (1123 bytes)
	I1009 19:44:05.359064  182334 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-136449/.minikube/key.pem, removing ...
	I1009 19:44:05.359074  182334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-136449/.minikube/key.pem
	I1009 19:44:05.359100  182334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-136449/.minikube/key.pem (1675 bytes)
	I1009 19:44:05.359162  182334 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-136449/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca-key.pem org=jenkins.calico-980148 san=[127.0.0.1 192.168.50.239 calico-980148 localhost minikube]
	I1009 19:44:05.783584  182334 provision.go:177] copyRemoteCerts
	I1009 19:44:05.783685  182334 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:44:05.783724  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHHostname
	I1009 19:44:05.786850  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:05.787238  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:05.787266  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:05.787519  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHPort
	I1009 19:44:05.787751  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:05.787938  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHUsername
	I1009 19:44:05.788091  182334 sshutil.go:53] new ssh client: &{IP:192.168.50.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/calico-980148/id_rsa Username:docker}
	I1009 19:44:05.876251  182334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:44:05.911432  182334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 19:44:05.945259  182334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:44:05.979629  182334 provision.go:87] duration metric: took 628.638584ms to configureAuth
	I1009 19:44:05.979659  182334 buildroot.go:189] setting minikube options for container-runtime
	I1009 19:44:05.979901  182334 config.go:182] Loaded profile config "calico-980148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:44:05.980004  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHHostname
	I1009 19:44:05.983346  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:05.983811  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:05.983841  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:05.984083  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHPort
	I1009 19:44:05.984348  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:05.984521  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:05.984685  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHUsername
	I1009 19:44:05.984842  182334 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:05.985071  182334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.239 22 <nil> <nil>}
	I1009 19:44:05.985097  182334 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:44:06.546985  182334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:44:06.547009  182334 main.go:141] libmachine: Checking connection to Docker...
	I1009 19:44:06.547017  182334 main.go:141] libmachine: (calico-980148) Calling .GetURL
	I1009 19:44:06.548424  182334 main.go:141] libmachine: (calico-980148) DBG | using libvirt version 8000000
	I1009 19:44:06.551614  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.552083  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:06.552118  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.552282  182334 main.go:141] libmachine: Docker is up and running!
	I1009 19:44:06.552295  182334 main.go:141] libmachine: Reticulating splines...
	I1009 19:44:06.552302  182334 client.go:171] duration metric: took 19.219582645s to LocalClient.Create
	I1009 19:44:06.552326  182334 start.go:168] duration metric: took 19.219644386s to libmachine.API.Create "calico-980148"
	I1009 19:44:06.552335  182334 start.go:294] postStartSetup for "calico-980148" (driver="kvm2")
	I1009 19:44:06.552348  182334 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:44:06.552368  182334 main.go:141] libmachine: (calico-980148) Calling .DriverName
	I1009 19:44:06.552704  182334 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:44:06.552740  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHHostname
	I1009 19:44:06.556512  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.557029  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:06.557060  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.557271  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHPort
	I1009 19:44:06.557465  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:06.557689  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHUsername
	I1009 19:44:06.557864  182334 sshutil.go:53] new ssh client: &{IP:192.168.50.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/calico-980148/id_rsa Username:docker}
	I1009 19:44:06.651553  182334 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:44:06.657619  182334 info.go:137] Remote host: Buildroot 2025.02
	I1009 19:44:06.657652  182334 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-136449/.minikube/addons for local assets ...
	I1009 19:44:06.657725  182334 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-136449/.minikube/files for local assets ...
	I1009 19:44:06.657884  182334 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-136449/.minikube/files/etc/ssl/certs/1403582.pem -> 1403582.pem in /etc/ssl/certs
	I1009 19:44:06.658042  182334 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:44:06.671899  182334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/files/etc/ssl/certs/1403582.pem --> /etc/ssl/certs/1403582.pem (1708 bytes)
	I1009 19:44:06.706143  182334 start.go:297] duration metric: took 153.791101ms for postStartSetup
	I1009 19:44:06.706193  182334 main.go:141] libmachine: (calico-980148) Calling .GetConfigRaw
	I1009 19:44:06.706847  182334 main.go:141] libmachine: (calico-980148) Calling .GetIP
	I1009 19:44:06.709704  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.710169  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:06.710195  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.710586  182334 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/calico-980148/config.json ...
	I1009 19:44:06.710883  182334 start.go:129] duration metric: took 19.396107128s to createHost
	I1009 19:44:06.710914  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHHostname
	I1009 19:44:06.713877  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.714290  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:06.714318  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.714520  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHPort
	I1009 19:44:06.714751  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:06.714940  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:06.715076  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHUsername
	I1009 19:44:06.715195  182334 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:06.715410  182334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.239 22 <nil> <nil>}
	I1009 19:44:06.715421  182334 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 19:44:06.830682  182334 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760039046.784843746
	
	I1009 19:44:06.830707  182334 fix.go:217] guest clock: 1760039046.784843746
	I1009 19:44:06.830717  182334 fix.go:230] Guest: 2025-10-09 19:44:06.784843746 +0000 UTC Remote: 2025-10-09 19:44:06.710900055 +0000 UTC m=+19.526284220 (delta=73.943691ms)
	I1009 19:44:06.830744  182334 fix.go:201] guest clock delta is within tolerance: 73.943691ms
	I1009 19:44:06.830751  182334 start.go:84] releasing machines lock for "calico-980148", held for 19.516048572s
	I1009 19:44:06.830777  182334 main.go:141] libmachine: (calico-980148) Calling .DriverName
	I1009 19:44:06.831066  182334 main.go:141] libmachine: (calico-980148) Calling .GetIP
	I1009 19:44:06.834520  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.835054  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:06.835090  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.835316  182334 main.go:141] libmachine: (calico-980148) Calling .DriverName
	I1009 19:44:06.835899  182334 main.go:141] libmachine: (calico-980148) Calling .DriverName
	I1009 19:44:06.836108  182334 main.go:141] libmachine: (calico-980148) Calling .DriverName
	I1009 19:44:06.836213  182334 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:44:06.836265  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHHostname
	I1009 19:44:06.836323  182334 ssh_runner.go:195] Run: cat /version.json
	I1009 19:44:06.836353  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHHostname
	I1009 19:44:06.839806  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.840235  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:06.840263  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.840281  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.840547  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHPort
	I1009 19:44:06.840823  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:06.840874  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:06.840945  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.840983  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHUsername
	I1009 19:44:06.841126  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHPort
	I1009 19:44:06.841201  182334 sshutil.go:53] new ssh client: &{IP:192.168.50.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/calico-980148/id_rsa Username:docker}
	I1009 19:44:06.841327  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:06.841503  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHUsername
	I1009 19:44:06.841642  182334 sshutil.go:53] new ssh client: &{IP:192.168.50.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/calico-980148/id_rsa Username:docker}
	I1009 19:44:06.956311  182334 ssh_runner.go:195] Run: systemctl --version
	I1009 19:44:06.963824  182334 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:44:07.135356  182334 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:44:07.143852  182334 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:44:07.143928  182334 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:44:07.169602  182334 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 19:44:07.169631  182334 start.go:496] detecting cgroup driver to use...
	I1009 19:44:07.169700  182334 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:44:07.196426  182334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:44:07.217000  182334 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:44:07.217056  182334 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	W1009 19:44:04.632760  180627 pod_ready.go:104] pod "etcd-pause-612343" is not "Ready", error: <nil>
	W1009 19:44:06.634063  180627 pod_ready.go:104] pod "etcd-pause-612343" is not "Ready", error: <nil>
	I1009 19:44:06.857105  182549 out.go:252] * Updating the running kvm2 "cert-expiration-635437" VM ...
	I1009 19:44:06.857129  182549 machine.go:93] provisionDockerMachine start ...
	I1009 19:44:06.857145  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .DriverName
	I1009 19:44:06.857380  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHHostname
	I1009 19:44:06.860316  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:06.860870  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:f6:bd", ip: ""} in network mk-cert-expiration-635437: {Iface:virbr1 ExpiryTime:2025-10-09 20:40:32 +0000 UTC Type:0 Mac:52:54:00:e9:f6:bd Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:cert-expiration-635437 Clientid:01:52:54:00:e9:f6:bd}
	I1009 19:44:06.860900  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined IP address 192.168.39.40 and MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:06.861129  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHPort
	I1009 19:44:06.861321  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHKeyPath
	I1009 19:44:06.861464  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHKeyPath
	I1009 19:44:06.861622  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHUsername
	I1009 19:44:06.861871  182549 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:06.862194  182549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I1009 19:44:06.862201  182549 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:44:06.982740  182549 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-635437
	
	I1009 19:44:06.982759  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetMachineName
	I1009 19:44:06.983048  182549 buildroot.go:166] provisioning hostname "cert-expiration-635437"
	I1009 19:44:06.983073  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetMachineName
	I1009 19:44:06.983296  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHHostname
	I1009 19:44:06.986966  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:06.987439  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:f6:bd", ip: ""} in network mk-cert-expiration-635437: {Iface:virbr1 ExpiryTime:2025-10-09 20:40:32 +0000 UTC Type:0 Mac:52:54:00:e9:f6:bd Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:cert-expiration-635437 Clientid:01:52:54:00:e9:f6:bd}
	I1009 19:44:06.987475  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined IP address 192.168.39.40 and MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:06.987643  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHPort
	I1009 19:44:06.987836  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHKeyPath
	I1009 19:44:06.987979  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHKeyPath
	I1009 19:44:06.988154  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHUsername
	I1009 19:44:06.988375  182549 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:06.988650  182549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I1009 19:44:06.988661  182549 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-635437 && echo "cert-expiration-635437" | sudo tee /etc/hostname
	I1009 19:44:07.123701  182549 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-635437
	
	I1009 19:44:07.123717  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHHostname
	I1009 19:44:07.127456  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:07.127956  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:f6:bd", ip: ""} in network mk-cert-expiration-635437: {Iface:virbr1 ExpiryTime:2025-10-09 20:40:32 +0000 UTC Type:0 Mac:52:54:00:e9:f6:bd Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:cert-expiration-635437 Clientid:01:52:54:00:e9:f6:bd}
	I1009 19:44:07.127982  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined IP address 192.168.39.40 and MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:07.128336  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHPort
	I1009 19:44:07.128527  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHKeyPath
	I1009 19:44:07.128734  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHKeyPath
	I1009 19:44:07.128890  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHUsername
	I1009 19:44:07.129092  182549 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:07.129293  182549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I1009 19:44:07.129304  182549 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-635437' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-635437/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-635437' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:44:07.250758  182549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:44:07.250781  182549 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21683-136449/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-136449/.minikube}
	I1009 19:44:07.250840  182549 buildroot.go:174] setting up certificates
	I1009 19:44:07.250855  182549 provision.go:84] configureAuth start
	I1009 19:44:07.250867  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetMachineName
	I1009 19:44:07.251236  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetIP
	I1009 19:44:07.254721  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:07.255191  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:f6:bd", ip: ""} in network mk-cert-expiration-635437: {Iface:virbr1 ExpiryTime:2025-10-09 20:40:32 +0000 UTC Type:0 Mac:52:54:00:e9:f6:bd Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:cert-expiration-635437 Clientid:01:52:54:00:e9:f6:bd}
	I1009 19:44:07.255215  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined IP address 192.168.39.40 and MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:07.255497  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHHostname
	I1009 19:44:07.258842  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:07.259268  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:f6:bd", ip: ""} in network mk-cert-expiration-635437: {Iface:virbr1 ExpiryTime:2025-10-09 20:40:32 +0000 UTC Type:0 Mac:52:54:00:e9:f6:bd Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:cert-expiration-635437 Clientid:01:52:54:00:e9:f6:bd}
	I1009 19:44:07.259316  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined IP address 192.168.39.40 and MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:07.259714  182549 provision.go:143] copyHostCerts
	I1009 19:44:07.259766  182549 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-136449/.minikube/ca.pem, removing ...
	I1009 19:44:07.259788  182549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-136449/.minikube/ca.pem
	I1009 19:44:07.259850  182549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-136449/.minikube/ca.pem (1082 bytes)
	I1009 19:44:07.259969  182549 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-136449/.minikube/cert.pem, removing ...
	I1009 19:44:07.259974  182549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-136449/.minikube/cert.pem
	I1009 19:44:07.260004  182549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-136449/.minikube/cert.pem (1123 bytes)
	I1009 19:44:07.260085  182549 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-136449/.minikube/key.pem, removing ...
	I1009 19:44:07.260090  182549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-136449/.minikube/key.pem
	I1009 19:44:07.260122  182549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-136449/.minikube/key.pem (1675 bytes)
	I1009 19:44:07.260249  182549 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-136449/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-635437 san=[127.0.0.1 192.168.39.40 cert-expiration-635437 localhost minikube]
	I1009 19:44:07.503263  182549 provision.go:177] copyRemoteCerts
	I1009 19:44:07.503310  182549 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:44:07.503333  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHHostname
	I1009 19:44:07.506879  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:07.507341  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:f6:bd", ip: ""} in network mk-cert-expiration-635437: {Iface:virbr1 ExpiryTime:2025-10-09 20:40:32 +0000 UTC Type:0 Mac:52:54:00:e9:f6:bd Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:cert-expiration-635437 Clientid:01:52:54:00:e9:f6:bd}
	I1009 19:44:07.507359  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined IP address 192.168.39.40 and MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:07.507603  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHPort
	I1009 19:44:07.507801  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHKeyPath
	I1009 19:44:07.507966  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHUsername
	I1009 19:44:07.508096  182549 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/cert-expiration-635437/id_rsa Username:docker}
	I1009 19:44:07.601586  182549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1009 19:44:07.638626  182549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:44:07.673647  182549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:44:07.710771  182549 provision.go:87] duration metric: took 459.89931ms to configureAuth
	I1009 19:44:07.710793  182549 buildroot.go:189] setting minikube options for container-runtime
	I1009 19:44:07.710989  182549 config.go:182] Loaded profile config "cert-expiration-635437": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:44:07.711052  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHHostname
	I1009 19:44:07.714281  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:07.714704  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:f6:bd", ip: ""} in network mk-cert-expiration-635437: {Iface:virbr1 ExpiryTime:2025-10-09 20:40:32 +0000 UTC Type:0 Mac:52:54:00:e9:f6:bd Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:cert-expiration-635437 Clientid:01:52:54:00:e9:f6:bd}
	I1009 19:44:07.714729  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined IP address 192.168.39.40 and MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:07.714970  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHPort
	I1009 19:44:07.715216  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHKeyPath
	I1009 19:44:07.715430  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHKeyPath
	I1009 19:44:07.715641  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHUsername
	I1009 19:44:07.715840  182549 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:07.716134  182549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I1009 19:44:07.716150  182549 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:44:07.238431  182334 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:44:07.261248  182334 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:44:07.435878  182334 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:44:07.676995  182334 docker.go:234] disabling docker service ...
	I1009 19:44:07.677060  182334 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:44:07.695925  182334 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:44:07.715016  182334 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:44:07.915531  182334 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:44:08.066437  182334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:44:08.085584  182334 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:44:08.110466  182334 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:44:08.110528  182334 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:08.125653  182334 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:44:08.125714  182334 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:08.144778  182334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:08.159719  182334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:08.173272  182334 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:44:08.187887  182334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:08.201293  182334 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:08.225147  182334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:08.239022  182334 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:44:08.250457  182334 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 19:44:08.250512  182334 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 19:44:08.274258  182334 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:44:08.289868  182334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:44:08.446168  182334 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:44:08.570418  182334 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:44:08.570504  182334 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:44:08.577143  182334 start.go:564] Will wait 60s for crictl version
	I1009 19:44:08.577202  182334 ssh_runner.go:195] Run: which crictl
	I1009 19:44:08.581828  182334 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 19:44:08.629024  182334 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 19:44:08.629128  182334 ssh_runner.go:195] Run: crio --version
	I1009 19:44:08.663662  182334 ssh_runner.go:195] Run: crio --version
	I1009 19:44:08.707203  182334 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1009 19:44:08.134145  180627 pod_ready.go:94] pod "etcd-pause-612343" is "Ready"
	I1009 19:44:08.134182  180627 pod_ready.go:86] duration metric: took 13.008456235s for pod "etcd-pause-612343" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:44:08.137172  180627 pod_ready.go:83] waiting for pod "kube-apiserver-pause-612343" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:44:08.143016  180627 pod_ready.go:94] pod "kube-apiserver-pause-612343" is "Ready"
	I1009 19:44:08.143050  180627 pod_ready.go:86] duration metric: took 5.842776ms for pod "kube-apiserver-pause-612343" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:44:08.145684  180627 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-612343" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:44:08.150977  180627 pod_ready.go:94] pod "kube-controller-manager-pause-612343" is "Ready"
	I1009 19:44:08.151008  180627 pod_ready.go:86] duration metric: took 5.289678ms for pod "kube-controller-manager-pause-612343" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:44:08.155018  180627 pod_ready.go:83] waiting for pod "kube-proxy-szpll" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:44:08.330090  180627 pod_ready.go:94] pod "kube-proxy-szpll" is "Ready"
	I1009 19:44:08.330121  180627 pod_ready.go:86] duration metric: took 175.083559ms for pod "kube-proxy-szpll" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:44:08.530511  180627 pod_ready.go:83] waiting for pod "kube-scheduler-pause-612343" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:44:09.331477  180627 pod_ready.go:94] pod "kube-scheduler-pause-612343" is "Ready"
	I1009 19:44:09.331514  180627 pod_ready.go:86] duration metric: took 800.969706ms for pod "kube-scheduler-pause-612343" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:44:09.331532  180627 pod_ready.go:40] duration metric: took 14.21865808s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:44:09.388623  180627 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1009 19:44:09.390460  180627 out.go:179] * Done! kubectl is now configured to use "pause-612343" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 09 19:44:10 pause-612343 crio[2791]: time="2025-10-09 19:44:10.251969430Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760039050251934559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e6d718e-97b2-4d66-a347-80f199d08d49 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:44:10 pause-612343 crio[2791]: time="2025-10-09 19:44:10.252802964Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=56dfeb9f-3c97-463a-835b-a5bd67b29a42 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:44:10 pause-612343 crio[2791]: time="2025-10-09 19:44:10.253194526Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=56dfeb9f-3c97-463a-835b-a5bd67b29a42 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:44:10 pause-612343 crio[2791]: time="2025-10-09 19:44:10.254137039Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:76d6212c65d695c65f3a9e21b71c17cbad0a2e50175abd306ef5a86f1093a726,PodSandboxId:7ca202dc68593e6b1ba4bf9c7c9c6965c1ffd1c017ca5c0776aacd227bcd2527,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760039029436903283,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a35f51b61eb97291585355e924230ab0,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e0225596818974d02a9675feba9c0672a1d62cd0286a165aaf4932d2998159,PodSandboxId:0594fd27d0b4f42604ab374af794309eab220d8f22d27121d4fda15aeb33b9fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760039029353113448,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 873e1704fe64d3c75f8296e55bac83ec,},Annotations:map[string]string{io.kubernetes.container.hash: e9e2
0c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62b69d7131b27ae7a7f30b034bfc3ae096bbf3ed81ed3c86310df0b56ddf7491,PodSandboxId:17a93ddc9fddba1321320e3815f53720f0d7734e236d89b9d206be06df47f91f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760039029355536272,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: c6a100e14b573d0be8428e46eeee0da0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d3e97ac0c8ea1a4165721f52582915f343d7bbc94e6703171b3bda25a0d26ff,PodSandboxId:78a69d5e0bdc6bc8459eb14da95e673fff39805afbbf56bdda3d39ebf23732b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760039029382891523,Labels:map[string]string{io.kubernetes.container.name: kube-controll
er-manager,io.kubernetes.pod.name: kube-controller-manager-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba43c82d048ad7bb697afcd4d81c4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ffde06198cbe6aad03a9a45a8b1affd409ae58bd86af1a2edf2c36944dda73f,PodSandboxId:9714a977749608f4fb43f9b41065e4649ecfd20a17989751ffb0e08dddcb0355,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Sta
te:CONTAINER_RUNNING,CreatedAt:1760039005438370293,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szpll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd002ef8-9d92-4b2e-a2cf-3acb2cf28e5b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e165de2fec8359c153f78ff140fd904d3fe755b6e5778ead48ad95a2320439,PodSandboxId:13c6adf6e5f78d4fb4b2c6a426395a2b0ab557633c0f8dc8ce0113607d34f5d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17600
39006190559124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pw6gm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56aa3de3-82c9-4b63-9d74-a71586ddf7af,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:111526125ab0d3e20b8b1d0d02c044f9aefd3b45a90d5b94f641e221ade4c254,PodSandboxId:0594fd27d0b4f42604ab374af794309eab220d8f22d271
21d4fda15aeb33b9fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760039005219351767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 873e1704fe64d3c75f8296e55bac83ec,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60b0cc479ec3852a706fb93935290f72957b4299faa5a1ef242d
48875c76b7b7,PodSandboxId:17a93ddc9fddba1321320e3815f53720f0d7734e236d89b9d206be06df47f91f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760039005059850062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a100e14b573d0be8428e46eeee0da0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:bc15b6320615dd87582440699484b1cd8bfdce38a3e99d4a1ab0286bf7308d29,PodSandboxId:7ca202dc68593e6b1ba4bf9c7c9c6965c1ffd1c017ca5c0776aacd227bcd2527,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760039004901628725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a35f51b61eb97291585355e924230ab0,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd52e789b1557c75034fd366118f56b15157e680cd59cbc92559fd05a9511bc9,PodSandboxId:78a69d5e0bdc6bc8459eb14da95e673fff39805afbbf56bdda3d39ebf23732b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760039004795956714,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba43c82d048ad7bb697afcd4d81c4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPo
rt\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f30930d41ed78ad8cacac4a933b82bacbafa334955df6a526020cdb0bdbd20cf,PodSandboxId:b2b11da3b8df5c6c5014b3e410b8458282dc00b53ec778c0d9307cef47fb6320,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760038939572636972,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szpll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd002ef8-9d92-4b2e-a2cf-3acb2cf28e5b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ced85d95dcdefa422d5d30cf109664a2ed57762475b4d0bbc39cc8459b238880,PodSandboxId:02980fb5e67c22c79cbaafb76e23ec2bc002f51fee898378fefbbfd2f8a093c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760038939079369566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pw6gm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56aa3de3-82c9-4b63-9d74-a71586ddf7af,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=56dfeb9f-3c97-463a-835b-a5bd67b29a42 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:44:10 pause-612343 crio[2791]: time="2025-10-09 19:44:10.321423478Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=16453734-9605-4262-8b22-d6ff1cdd2829 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:44:10 pause-612343 crio[2791]: time="2025-10-09 19:44:10.321599515Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16453734-9605-4262-8b22-d6ff1cdd2829 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:44:10 pause-612343 crio[2791]: time="2025-10-09 19:44:10.323563009Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ff1fd908-7865-45c9-b9f6-0705e898357b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:44:10 pause-612343 crio[2791]: time="2025-10-09 19:44:10.324313798Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760039050324277326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ff1fd908-7865-45c9-b9f6-0705e898357b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:44:10 pause-612343 crio[2791]: time="2025-10-09 19:44:10.325996270Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=559d0a31-9637-441c-9d9d-90a55c75fca1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:44:10 pause-612343 crio[2791]: time="2025-10-09 19:44:10.326130320Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=559d0a31-9637-441c-9d9d-90a55c75fca1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:44:10 pause-612343 crio[2791]: time="2025-10-09 19:44:10.326504582Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:76d6212c65d695c65f3a9e21b71c17cbad0a2e50175abd306ef5a86f1093a726,PodSandboxId:7ca202dc68593e6b1ba4bf9c7c9c6965c1ffd1c017ca5c0776aacd227bcd2527,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760039029436903283,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a35f51b61eb97291585355e924230ab0,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e0225596818974d02a9675feba9c0672a1d62cd0286a165aaf4932d2998159,PodSandboxId:0594fd27d0b4f42604ab374af794309eab220d8f22d27121d4fda15aeb33b9fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760039029353113448,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 873e1704fe64d3c75f8296e55bac83ec,},Annotations:map[string]string{io.kubernetes.container.hash: e9e2
0c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62b69d7131b27ae7a7f30b034bfc3ae096bbf3ed81ed3c86310df0b56ddf7491,PodSandboxId:17a93ddc9fddba1321320e3815f53720f0d7734e236d89b9d206be06df47f91f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760039029355536272,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: c6a100e14b573d0be8428e46eeee0da0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d3e97ac0c8ea1a4165721f52582915f343d7bbc94e6703171b3bda25a0d26ff,PodSandboxId:78a69d5e0bdc6bc8459eb14da95e673fff39805afbbf56bdda3d39ebf23732b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760039029382891523,Labels:map[string]string{io.kubernetes.container.name: kube-controll
er-manager,io.kubernetes.pod.name: kube-controller-manager-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba43c82d048ad7bb697afcd4d81c4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ffde06198cbe6aad03a9a45a8b1affd409ae58bd86af1a2edf2c36944dda73f,PodSandboxId:9714a977749608f4fb43f9b41065e4649ecfd20a17989751ffb0e08dddcb0355,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Sta
te:CONTAINER_RUNNING,CreatedAt:1760039005438370293,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szpll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd002ef8-9d92-4b2e-a2cf-3acb2cf28e5b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e165de2fec8359c153f78ff140fd904d3fe755b6e5778ead48ad95a2320439,PodSandboxId:13c6adf6e5f78d4fb4b2c6a426395a2b0ab557633c0f8dc8ce0113607d34f5d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17600
39006190559124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pw6gm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56aa3de3-82c9-4b63-9d74-a71586ddf7af,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:111526125ab0d3e20b8b1d0d02c044f9aefd3b45a90d5b94f641e221ade4c254,PodSandboxId:0594fd27d0b4f42604ab374af794309eab220d8f22d271
21d4fda15aeb33b9fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760039005219351767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 873e1704fe64d3c75f8296e55bac83ec,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60b0cc479ec3852a706fb93935290f72957b4299faa5a1ef242d
48875c76b7b7,PodSandboxId:17a93ddc9fddba1321320e3815f53720f0d7734e236d89b9d206be06df47f91f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760039005059850062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a100e14b573d0be8428e46eeee0da0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:bc15b6320615dd87582440699484b1cd8bfdce38a3e99d4a1ab0286bf7308d29,PodSandboxId:7ca202dc68593e6b1ba4bf9c7c9c6965c1ffd1c017ca5c0776aacd227bcd2527,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760039004901628725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a35f51b61eb97291585355e924230ab0,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd52e789b1557c75034fd366118f56b15157e680cd59cbc92559fd05a9511bc9,PodSandboxId:78a69d5e0bdc6bc8459eb14da95e673fff39805afbbf56bdda3d39ebf23732b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760039004795956714,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba43c82d048ad7bb697afcd4d81c4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPo
rt\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f30930d41ed78ad8cacac4a933b82bacbafa334955df6a526020cdb0bdbd20cf,PodSandboxId:b2b11da3b8df5c6c5014b3e410b8458282dc00b53ec778c0d9307cef47fb6320,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760038939572636972,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szpll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd002ef8-9d92-4b2e-a2cf-3acb2cf28e5b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ced85d95dcdefa422d5d30cf109664a2ed57762475b4d0bbc39cc8459b238880,PodSandboxId:02980fb5e67c22c79cbaafb76e23ec2bc002f51fee898378fefbbfd2f8a093c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760038939079369566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pw6gm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56aa3de3-82c9-4b63-9d74-a71586ddf7af,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=559d0a31-9637-441c-9d9d-90a55c75fca1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:44:10 pause-612343 crio[2791]: time="2025-10-09 19:44:10.391695393Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7222664d-9606-4e5c-a903-0e6115621d47 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:44:10 pause-612343 crio[2791]: time="2025-10-09 19:44:10.392481334Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7222664d-9606-4e5c-a903-0e6115621d47 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:44:10 pause-612343 crio[2791]: time="2025-10-09 19:44:10.394713138Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bc97fba7-8732-4d69-850a-02765d711b7d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:44:10 pause-612343 crio[2791]: time="2025-10-09 19:44:10.395588420Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760039050395543990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc97fba7-8732-4d69-850a-02765d711b7d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:44:10 pause-612343 crio[2791]: time="2025-10-09 19:44:10.396255230Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0905e3bd-f39c-4fe6-904f-06df61af3602 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:44:10 pause-612343 crio[2791]: time="2025-10-09 19:44:10.396353121Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0905e3bd-f39c-4fe6-904f-06df61af3602 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:44:10 pause-612343 crio[2791]: time="2025-10-09 19:44:10.396808481Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:76d6212c65d695c65f3a9e21b71c17cbad0a2e50175abd306ef5a86f1093a726,PodSandboxId:7ca202dc68593e6b1ba4bf9c7c9c6965c1ffd1c017ca5c0776aacd227bcd2527,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760039029436903283,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a35f51b61eb97291585355e924230ab0,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e0225596818974d02a9675feba9c0672a1d62cd0286a165aaf4932d2998159,PodSandboxId:0594fd27d0b4f42604ab374af794309eab220d8f22d27121d4fda15aeb33b9fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760039029353113448,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 873e1704fe64d3c75f8296e55bac83ec,},Annotations:map[string]string{io.kubernetes.container.hash: e9e2
0c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62b69d7131b27ae7a7f30b034bfc3ae096bbf3ed81ed3c86310df0b56ddf7491,PodSandboxId:17a93ddc9fddba1321320e3815f53720f0d7734e236d89b9d206be06df47f91f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760039029355536272,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: c6a100e14b573d0be8428e46eeee0da0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d3e97ac0c8ea1a4165721f52582915f343d7bbc94e6703171b3bda25a0d26ff,PodSandboxId:78a69d5e0bdc6bc8459eb14da95e673fff39805afbbf56bdda3d39ebf23732b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760039029382891523,Labels:map[string]string{io.kubernetes.container.name: kube-controll
er-manager,io.kubernetes.pod.name: kube-controller-manager-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba43c82d048ad7bb697afcd4d81c4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ffde06198cbe6aad03a9a45a8b1affd409ae58bd86af1a2edf2c36944dda73f,PodSandboxId:9714a977749608f4fb43f9b41065e4649ecfd20a17989751ffb0e08dddcb0355,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Sta
te:CONTAINER_RUNNING,CreatedAt:1760039005438370293,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szpll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd002ef8-9d92-4b2e-a2cf-3acb2cf28e5b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e165de2fec8359c153f78ff140fd904d3fe755b6e5778ead48ad95a2320439,PodSandboxId:13c6adf6e5f78d4fb4b2c6a426395a2b0ab557633c0f8dc8ce0113607d34f5d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17600
39006190559124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pw6gm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56aa3de3-82c9-4b63-9d74-a71586ddf7af,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:111526125ab0d3e20b8b1d0d02c044f9aefd3b45a90d5b94f641e221ade4c254,PodSandboxId:0594fd27d0b4f42604ab374af794309eab220d8f22d271
21d4fda15aeb33b9fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760039005219351767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 873e1704fe64d3c75f8296e55bac83ec,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60b0cc479ec3852a706fb93935290f72957b4299faa5a1ef242d
48875c76b7b7,PodSandboxId:17a93ddc9fddba1321320e3815f53720f0d7734e236d89b9d206be06df47f91f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760039005059850062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a100e14b573d0be8428e46eeee0da0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:bc15b6320615dd87582440699484b1cd8bfdce38a3e99d4a1ab0286bf7308d29,PodSandboxId:7ca202dc68593e6b1ba4bf9c7c9c6965c1ffd1c017ca5c0776aacd227bcd2527,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760039004901628725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a35f51b61eb97291585355e924230ab0,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd52e789b1557c75034fd366118f56b15157e680cd59cbc92559fd05a9511bc9,PodSandboxId:78a69d5e0bdc6bc8459eb14da95e673fff39805afbbf56bdda3d39ebf23732b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760039004795956714,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba43c82d048ad7bb697afcd4d81c4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPo
rt\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f30930d41ed78ad8cacac4a933b82bacbafa334955df6a526020cdb0bdbd20cf,PodSandboxId:b2b11da3b8df5c6c5014b3e410b8458282dc00b53ec778c0d9307cef47fb6320,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760038939572636972,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szpll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd002ef8-9d92-4b2e-a2cf-3acb2cf28e5b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ced85d95dcdefa422d5d30cf109664a2ed57762475b4d0bbc39cc8459b238880,PodSandboxId:02980fb5e67c22c79cbaafb76e23ec2bc002f51fee898378fefbbfd2f8a093c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760038939079369566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pw6gm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56aa3de3-82c9-4b63-9d74-a71586ddf7af,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0905e3bd-f39c-4fe6-904f-06df61af3602 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:44:10 pause-612343 crio[2791]: time="2025-10-09 19:44:10.457593958Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=62cf5dcf-6056-4a81-95b2-5d7b131f3c37 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:44:10 pause-612343 crio[2791]: time="2025-10-09 19:44:10.457701792Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=62cf5dcf-6056-4a81-95b2-5d7b131f3c37 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:44:10 pause-612343 crio[2791]: time="2025-10-09 19:44:10.459961156Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d5fbf8a3-9847-4cb3-a205-b7608a5929c6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:44:10 pause-612343 crio[2791]: time="2025-10-09 19:44:10.461187278Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760039050461155166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d5fbf8a3-9847-4cb3-a205-b7608a5929c6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:44:10 pause-612343 crio[2791]: time="2025-10-09 19:44:10.462000551Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7830fb85-c893-4a03-b4c4-3aed61349c55 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:44:10 pause-612343 crio[2791]: time="2025-10-09 19:44:10.462068339Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7830fb85-c893-4a03-b4c4-3aed61349c55 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:44:10 pause-612343 crio[2791]: time="2025-10-09 19:44:10.462948224Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:76d6212c65d695c65f3a9e21b71c17cbad0a2e50175abd306ef5a86f1093a726,PodSandboxId:7ca202dc68593e6b1ba4bf9c7c9c6965c1ffd1c017ca5c0776aacd227bcd2527,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760039029436903283,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a35f51b61eb97291585355e924230ab0,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e0225596818974d02a9675feba9c0672a1d62cd0286a165aaf4932d2998159,PodSandboxId:0594fd27d0b4f42604ab374af794309eab220d8f22d27121d4fda15aeb33b9fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760039029353113448,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 873e1704fe64d3c75f8296e55bac83ec,},Annotations:map[string]string{io.kubernetes.container.hash: e9e2
0c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62b69d7131b27ae7a7f30b034bfc3ae096bbf3ed81ed3c86310df0b56ddf7491,PodSandboxId:17a93ddc9fddba1321320e3815f53720f0d7734e236d89b9d206be06df47f91f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760039029355536272,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: c6a100e14b573d0be8428e46eeee0da0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d3e97ac0c8ea1a4165721f52582915f343d7bbc94e6703171b3bda25a0d26ff,PodSandboxId:78a69d5e0bdc6bc8459eb14da95e673fff39805afbbf56bdda3d39ebf23732b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760039029382891523,Labels:map[string]string{io.kubernetes.container.name: kube-controll
er-manager,io.kubernetes.pod.name: kube-controller-manager-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba43c82d048ad7bb697afcd4d81c4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ffde06198cbe6aad03a9a45a8b1affd409ae58bd86af1a2edf2c36944dda73f,PodSandboxId:9714a977749608f4fb43f9b41065e4649ecfd20a17989751ffb0e08dddcb0355,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Sta
te:CONTAINER_RUNNING,CreatedAt:1760039005438370293,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szpll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd002ef8-9d92-4b2e-a2cf-3acb2cf28e5b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e165de2fec8359c153f78ff140fd904d3fe755b6e5778ead48ad95a2320439,PodSandboxId:13c6adf6e5f78d4fb4b2c6a426395a2b0ab557633c0f8dc8ce0113607d34f5d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17600
39006190559124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pw6gm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56aa3de3-82c9-4b63-9d74-a71586ddf7af,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:111526125ab0d3e20b8b1d0d02c044f9aefd3b45a90d5b94f641e221ade4c254,PodSandboxId:0594fd27d0b4f42604ab374af794309eab220d8f22d271
21d4fda15aeb33b9fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760039005219351767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 873e1704fe64d3c75f8296e55bac83ec,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60b0cc479ec3852a706fb93935290f72957b4299faa5a1ef242d
48875c76b7b7,PodSandboxId:17a93ddc9fddba1321320e3815f53720f0d7734e236d89b9d206be06df47f91f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760039005059850062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a100e14b573d0be8428e46eeee0da0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:bc15b6320615dd87582440699484b1cd8bfdce38a3e99d4a1ab0286bf7308d29,PodSandboxId:7ca202dc68593e6b1ba4bf9c7c9c6965c1ffd1c017ca5c0776aacd227bcd2527,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760039004901628725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a35f51b61eb97291585355e924230ab0,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd52e789b1557c75034fd366118f56b15157e680cd59cbc92559fd05a9511bc9,PodSandboxId:78a69d5e0bdc6bc8459eb14da95e673fff39805afbbf56bdda3d39ebf23732b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760039004795956714,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba43c82d048ad7bb697afcd4d81c4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPo
rt\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f30930d41ed78ad8cacac4a933b82bacbafa334955df6a526020cdb0bdbd20cf,PodSandboxId:b2b11da3b8df5c6c5014b3e410b8458282dc00b53ec778c0d9307cef47fb6320,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760038939572636972,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szpll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd002ef8-9d92-4b2e-a2cf-3acb2cf28e5b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ced85d95dcdefa422d5d30cf109664a2ed57762475b4d0bbc39cc8459b238880,PodSandboxId:02980fb5e67c22c79cbaafb76e23ec2bc002f51fee898378fefbbfd2f8a093c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760038939079369566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pw6gm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56aa3de3-82c9-4b63-9d74-a71586ddf7af,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7830fb85-c893-4a03-b4c4-3aed61349c55 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	76d6212c65d69       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   21 seconds ago       Running             kube-apiserver            2                   7ca202dc68593       kube-apiserver-pause-612343
	0d3e97ac0c8ea       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   21 seconds ago       Running             kube-controller-manager   2                   78a69d5e0bdc6       kube-controller-manager-pause-612343
	62b69d7131b27       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   21 seconds ago       Running             kube-scheduler            2                   17a93ddc9fddb       kube-scheduler-pause-612343
	d0e0225596818       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   21 seconds ago       Running             etcd                      2                   0594fd27d0b4f       etcd-pause-612343
	d8e165de2fec8       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   44 seconds ago       Running             coredns                   1                   13c6adf6e5f78       coredns-66bc5c9577-pw6gm
	4ffde06198cbe       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   45 seconds ago       Running             kube-proxy                1                   9714a97774960       kube-proxy-szpll
	111526125ab0d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   45 seconds ago       Exited              etcd                      1                   0594fd27d0b4f       etcd-pause-612343
	60b0cc479ec38       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   45 seconds ago       Exited              kube-scheduler            1                   17a93ddc9fddb       kube-scheduler-pause-612343
	bc15b6320615d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   45 seconds ago       Exited              kube-apiserver            1                   7ca202dc68593       kube-apiserver-pause-612343
	fd52e789b1557       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   45 seconds ago       Exited              kube-controller-manager   1                   78a69d5e0bdc6       kube-controller-manager-pause-612343
	f30930d41ed78       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   About a minute ago   Exited              kube-proxy                0                   b2b11da3b8df5       kube-proxy-szpll
	ced85d95dcdef       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   02980fb5e67c2       coredns-66bc5c9577-pw6gm
	
	
	==> coredns [ced85d95dcdefa422d5d30cf109664a2ed57762475b4d0bbc39cc8459b238880] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48784 - 7518 "HINFO IN 3947369618741164974.8529760864723045159. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028303103s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d8e165de2fec8359c153f78ff140fd904d3fe755b6e5778ead48ad95a2320439] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53937 - 23263 "HINFO IN 2019830926337538257.5552492686097077422. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.159943081s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:45704->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:45712->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:45710->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               pause-612343
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-612343
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=pause-612343
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T19_42_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:42:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-612343
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:44:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:43:52 +0000   Thu, 09 Oct 2025 19:42:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:43:52 +0000   Thu, 09 Oct 2025 19:42:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:43:52 +0000   Thu, 09 Oct 2025 19:42:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 19:43:52 +0000   Thu, 09 Oct 2025 19:42:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.79
	  Hostname:    pause-612343
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c9d770d22646a58886847763ae7dec
	  System UUID:                a6c9d770-d226-46a5-8886-847763ae7dec
	  Boot ID:                    9d5d7f19-16bf-4a05-9263-cdd2617aeed2
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-pw6gm                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     112s
	  kube-system                 etcd-pause-612343                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         117s
	  kube-system                 kube-apiserver-pause-612343             250m (12%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-pause-612343    200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-szpll                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-pause-612343             100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 110s                 kube-proxy       
	  Normal  Starting                 12s                  kube-proxy       
	  Normal  NodeHasSufficientPID     2m5s (x7 over 2m5s)  kubelet          Node pause-612343 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node pause-612343 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node pause-612343 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  117s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node pause-612343 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node pause-612343 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node pause-612343 status is now: NodeHasSufficientPID
	  Normal  NodeReady                116s                 kubelet          Node pause-612343 status is now: NodeReady
	  Normal  RegisteredNode           113s                 node-controller  Node pause-612343 event: Registered Node pause-612343 in Controller
	  Normal  Starting                 22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 22s)    kubelet          Node pause-612343 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 22s)    kubelet          Node pause-612343 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 22s)    kubelet          Node pause-612343 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14s                  node-controller  Node pause-612343 event: Registered Node pause-612343 in Controller
	
	
	==> dmesg <==
	[Oct 9 19:41] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000058] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002079] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.196846] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000028] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.087663] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.108864] kauditd_printk_skb: 74 callbacks suppressed
	[Oct 9 19:42] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.152161] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.205183] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.565405] kauditd_printk_skb: 207 callbacks suppressed
	[ +22.036629] kauditd_printk_skb: 38 callbacks suppressed
	[Oct 9 19:43] kauditd_printk_skb: 100 callbacks suppressed
	[ +12.124929] kauditd_printk_skb: 210 callbacks suppressed
	[  +3.472091] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [111526125ab0d3e20b8b1d0d02c044f9aefd3b45a90d5b94f641e221ade4c254] <==
	
	
	==> etcd [d0e0225596818974d02a9675feba9c0672a1d62cd0286a165aaf4932d2998159] <==
	{"level":"warn","ts":"2025-10-09T19:43:51.587594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.624586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.631444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.651943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.685616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.720963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.741601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.745936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.770233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.790641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.817978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.854028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.866600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.913056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.934390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.956156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.962913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.981853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.994376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:52.009545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:52.021944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:52.037472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:52.052156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:52.060519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:52.114243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55376","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:44:10 up 2 min,  0 users,  load average: 1.30, 0.64, 0.25
	Linux pause-612343 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [76d6212c65d695c65f3a9e21b71c17cbad0a2e50175abd306ef5a86f1093a726] <==
	I1009 19:43:52.812022       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1009 19:43:52.812628       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1009 19:43:52.812718       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1009 19:43:52.812822       1 aggregator.go:171] initial CRD sync complete...
	I1009 19:43:52.812830       1 autoregister_controller.go:144] Starting autoregister controller
	I1009 19:43:52.812835       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 19:43:52.812839       1 cache.go:39] Caches are synced for autoregister controller
	I1009 19:43:52.817157       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 19:43:52.817179       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1009 19:43:52.826362       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1009 19:43:52.831319       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1009 19:43:52.837897       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1009 19:43:52.861284       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1009 19:43:52.861393       1 policy_source.go:240] refreshing policies
	I1009 19:43:52.874177       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 19:43:52.900523       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 19:43:52.902481       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 19:43:53.722552       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 19:43:54.593469       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 19:43:54.634121       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1009 19:43:54.666360       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 19:43:54.673947       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 19:43:56.340472       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 19:43:56.493291       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 19:44:00.252839       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [bc15b6320615dd87582440699484b1cd8bfdce38a3e99d4a1ab0286bf7308d29] <==
	W1009 19:43:26.679126       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:26.687348       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1009 19:43:26.687432       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1009 19:43:26.692997       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1009 19:43:26.723100       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1009 19:43:26.756144       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1009 19:43:26.757777       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1009 19:43:26.758113       1 instance.go:239] Using reconciler: lease
	W1009 19:43:26.762389       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:26.762627       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:27.680086       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:27.687827       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:27.763316       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:29.300711       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:29.412346       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:29.595234       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:31.718825       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:32.221421       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:32.333262       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:35.783053       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:36.568121       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:37.029649       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:42.385408       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:42.598628       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:43.979292       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [0d3e97ac0c8ea1a4165721f52582915f343d7bbc94e6703171b3bda25a0d26ff] <==
	I1009 19:43:56.186178       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1009 19:43:56.186150       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1009 19:43:56.190884       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1009 19:43:56.192470       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1009 19:43:56.194907       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:43:56.194953       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1009 19:43:56.204442       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:43:56.204517       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 19:43:56.204535       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 19:43:56.210456       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1009 19:43:56.211247       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1009 19:43:56.214783       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1009 19:43:56.214795       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 19:43:56.219199       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1009 19:43:56.224846       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:43:56.225996       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1009 19:43:56.232301       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1009 19:43:56.236122       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1009 19:43:56.236243       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1009 19:43:56.236346       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1009 19:43:56.236297       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1009 19:43:56.236311       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1009 19:43:56.236328       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1009 19:43:56.236337       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1009 19:43:56.236277       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	
	
	==> kube-controller-manager [fd52e789b1557c75034fd366118f56b15157e680cd59cbc92559fd05a9511bc9] <==
	I1009 19:43:26.602314       1 serving.go:386] Generated self-signed cert in-memory
	I1009 19:43:27.352617       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1009 19:43:27.352657       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:43:27.354391       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1009 19:43:27.354541       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1009 19:43:27.355133       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1009 19:43:27.355617       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-proxy [4ffde06198cbe6aad03a9a45a8b1affd409ae58bd86af1a2edf2c36944dda73f] <==
	E1009 19:43:52.757317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"pause-612343\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1009 19:43:58.100416       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:43:58.100473       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.79"]
	E1009 19:43:58.100690       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:43:58.156695       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1009 19:43:58.156918       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 19:43:58.157052       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:43:58.169626       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:43:58.170001       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:43:58.170032       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:43:58.175213       1 config.go:200] "Starting service config controller"
	I1009 19:43:58.175367       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:43:58.175431       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:43:58.175848       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:43:58.175915       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:43:58.175921       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:43:58.176283       1 config.go:309] "Starting node config controller"
	I1009 19:43:58.176291       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:43:58.176296       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:43:58.276517       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 19:43:58.276562       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 19:43:58.276714       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [f30930d41ed78ad8cacac4a933b82bacbafa334955df6a526020cdb0bdbd20cf] <==
	I1009 19:42:19.820581       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 19:42:19.920863       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:42:19.920934       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.79"]
	E1009 19:42:19.921006       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:42:19.970406       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1009 19:42:19.970479       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 19:42:19.970525       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:42:19.983228       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:42:19.985405       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:42:19.985447       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:42:19.992965       1 config.go:200] "Starting service config controller"
	I1009 19:42:19.993027       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:42:19.993057       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:42:19.993071       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:42:19.993095       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:42:19.993142       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:42:19.997648       1 config.go:309] "Starting node config controller"
	I1009 19:42:19.997682       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:42:19.997932       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:42:20.093457       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 19:42:20.093499       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 19:42:20.093538       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [60b0cc479ec3852a706fb93935290f72957b4299faa5a1ef242d48875c76b7b7] <==
	I1009 19:43:27.207974       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [62b69d7131b27ae7a7f30b034bfc3ae096bbf3ed81ed3c86310df0b56ddf7491] <==
	I1009 19:43:51.078465       1 serving.go:386] Generated self-signed cert in-memory
	W1009 19:43:52.782139       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1009 19:43:52.782199       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1009 19:43:52.782213       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1009 19:43:52.782222       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1009 19:43:52.824256       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 19:43:52.825905       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:43:52.836812       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 19:43:52.838268       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:43:52.838305       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:43:52.838339       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 19:43:52.938412       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 19:43:51 pause-612343 kubelet[3810]: E1009 19:43:51.033340    3810 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-612343\" not found" node="pause-612343"
	Oct 09 19:43:51 pause-612343 kubelet[3810]: E1009 19:43:51.034423    3810 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-612343\" not found" node="pause-612343"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: E1009 19:43:52.041207    3810 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-612343\" not found" node="pause-612343"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: E1009 19:43:52.041449    3810 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-612343\" not found" node="pause-612343"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: E1009 19:43:52.041506    3810 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-612343\" not found" node="pause-612343"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: I1009 19:43:52.783683    3810 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-612343"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: I1009 19:43:52.850172    3810 apiserver.go:52] "Watching apiserver"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: I1009 19:43:52.884374    3810 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: I1009 19:43:52.885681    3810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd002ef8-9d92-4b2e-a2cf-3acb2cf28e5b-lib-modules\") pod \"kube-proxy-szpll\" (UID: \"cd002ef8-9d92-4b2e-a2cf-3acb2cf28e5b\") " pod="kube-system/kube-proxy-szpll"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: I1009 19:43:52.885780    3810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd002ef8-9d92-4b2e-a2cf-3acb2cf28e5b-xtables-lock\") pod \"kube-proxy-szpll\" (UID: \"cd002ef8-9d92-4b2e-a2cf-3acb2cf28e5b\") " pod="kube-system/kube-proxy-szpll"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: I1009 19:43:52.909848    3810 kubelet_node_status.go:124] "Node was previously registered" node="pause-612343"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: I1009 19:43:52.910212    3810 kubelet_node_status.go:78] "Successfully registered node" node="pause-612343"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: I1009 19:43:52.910320    3810 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: I1009 19:43:52.913362    3810 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: E1009 19:43:52.973839    3810 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-612343\" already exists" pod="kube-system/kube-apiserver-pause-612343"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: I1009 19:43:52.974056    3810 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-612343"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: E1009 19:43:52.992050    3810 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-612343\" already exists" pod="kube-system/kube-controller-manager-pause-612343"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: I1009 19:43:52.992076    3810 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-612343"
	Oct 09 19:43:53 pause-612343 kubelet[3810]: E1009 19:43:53.003778    3810 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-612343\" already exists" pod="kube-system/kube-scheduler-pause-612343"
	Oct 09 19:43:53 pause-612343 kubelet[3810]: I1009 19:43:53.005886    3810 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-612343"
	Oct 09 19:43:53 pause-612343 kubelet[3810]: E1009 19:43:53.031548    3810 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-612343\" already exists" pod="kube-system/etcd-pause-612343"
	Oct 09 19:43:59 pause-612343 kubelet[3810]: E1009 19:43:59.012231    3810 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760039039010608753  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 09 19:43:59 pause-612343 kubelet[3810]: E1009 19:43:59.013100    3810 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760039039010608753  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 09 19:44:09 pause-612343 kubelet[3810]: E1009 19:44:09.019457    3810 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760039049018608435  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 09 19:44:09 pause-612343 kubelet[3810]: E1009 19:44:09.019488    3810 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760039049018608435  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-612343 -n pause-612343
helpers_test.go:269: (dbg) Run:  kubectl --context pause-612343 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-612343 -n pause-612343
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-612343 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-612343 logs -n 25: (1.589782568s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                               ARGS                                                                               │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-980148 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                              │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo cat /etc/kubernetes/kubelet.conf                                                                                                             │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo cat /var/lib/kubelet/config.yaml                                                                                                             │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo systemctl status docker --all --full --no-pager                                                                                              │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │                     │
	│ ssh     │ -p auto-980148 sudo systemctl cat docker --no-pager                                                                                                              │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo cat /etc/docker/daemon.json                                                                                                                  │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo docker system info                                                                                                                           │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │                     │
	│ ssh     │ -p auto-980148 sudo systemctl status cri-docker --all --full --no-pager                                                                                          │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │                     │
	│ ssh     │ -p auto-980148 sudo systemctl cat cri-docker --no-pager                                                                                                          │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                     │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │                     │
	│ ssh     │ -p auto-980148 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                               │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo cri-dockerd --version                                                                                                                        │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo systemctl status containerd --all --full --no-pager                                                                                          │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │                     │
	│ ssh     │ -p auto-980148 sudo systemctl cat containerd --no-pager                                                                                                          │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo cat /lib/systemd/system/containerd.service                                                                                                   │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo cat /etc/containerd/config.toml                                                                                                              │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo containerd config dump                                                                                                                       │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo systemctl status crio --all --full --no-pager                                                                                                │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo systemctl cat crio --no-pager                                                                                                                │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                      │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ ssh     │ -p auto-980148 sudo crio config                                                                                                                                  │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ delete  │ -p auto-980148                                                                                                                                                   │ auto-980148            │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ start   │ -p calico-980148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ calico-980148          │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │                     │
	│ ssh     │ -p kindnet-980148 pgrep -a kubelet                                                                                                                               │ kindnet-980148         │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │ 09 Oct 25 19:43 UTC │
	│ start   │ -p cert-expiration-635437 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                              │ cert-expiration-635437 │ jenkins │ v1.37.0 │ 09 Oct 25 19:43 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:43:58
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:43:58.651095  182549 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:43:58.651390  182549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:43:58.651396  182549 out.go:374] Setting ErrFile to fd 2...
	I1009 19:43:58.651401  182549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:43:58.651739  182549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-136449/.minikube/bin
	I1009 19:43:58.652354  182549 out.go:368] Setting JSON to false
	I1009 19:43:58.653647  182549 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8779,"bootTime":1760030260,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:43:58.653764  182549 start.go:143] virtualization: kvm guest
	I1009 19:43:58.655353  182549 out.go:179] * [cert-expiration-635437] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:43:58.656753  182549 notify.go:221] Checking for updates...
	I1009 19:43:58.656802  182549 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:43:58.657961  182549 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:43:58.659306  182549 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-136449/kubeconfig
	I1009 19:43:58.660473  182549 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-136449/.minikube
	I1009 19:43:58.661708  182549 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:43:58.662854  182549 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:43:58.664792  182549 config.go:182] Loaded profile config "cert-expiration-635437": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:43:58.665364  182549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:43:58.665445  182549 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:43:58.685432  182549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34183
	I1009 19:43:58.686080  182549 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:43:58.686917  182549 main.go:141] libmachine: Using API Version  1
	I1009 19:43:58.686936  182549 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:43:58.687271  182549 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:43:58.687473  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .DriverName
	I1009 19:43:58.687839  182549 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:43:58.688297  182549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:43:58.688341  182549 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:43:58.703755  182549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36449
	I1009 19:43:58.704188  182549 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:43:58.704715  182549 main.go:141] libmachine: Using API Version  1
	I1009 19:43:58.704726  182549 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:43:58.705127  182549 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:43:58.705349  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .DriverName
	I1009 19:43:58.749928  182549 out.go:179] * Using the kvm2 driver based on existing profile
	I1009 19:43:58.750942  182549 start.go:309] selected driver: kvm2
	I1009 19:43:58.750950  182549 start.go:930] validating driver "kvm2" against &{Name:cert-expiration-635437 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-635437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:43:58.751045  182549 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:43:58.751875  182549 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:43:58.751984  182549 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-136449/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 19:43:58.769069  182549 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 19:43:58.769102  182549 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-136449/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 19:43:58.785134  182549 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 19:43:58.785526  182549 cni.go:84] Creating CNI manager for ""
	I1009 19:43:58.785587  182549 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 19:43:58.785644  182549 start.go:353] cluster config:
	{Name:cert-expiration-635437 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-635437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:43:58.785742  182549 iso.go:125] acquiring lock: {Name:mk98a4af23a55ce5e8a323d2964def6dd3fc61ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:43:58.787513  182549 out.go:179] * Starting "cert-expiration-635437" primary control-plane node in "cert-expiration-635437" cluster
	I1009 19:43:58.067284  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:43:58.068355  182334 main.go:141] libmachine: (calico-980148) DBG | no network interface addresses found for domain calico-980148 (source=lease)
	I1009 19:43:58.068442  182334 main.go:141] libmachine: (calico-980148) DBG | trying to list again with source=arp
	I1009 19:43:58.068771  182334 main.go:141] libmachine: (calico-980148) DBG | unable to find current IP address of domain calico-980148 in network mk-calico-980148 (interfaces detected: [])
	I1009 19:43:58.068961  182334 main.go:141] libmachine: (calico-980148) DBG | I1009 19:43:58.068896  182362 retry.go:31] will retry after 2.744610403s: waiting for domain to come up
	I1009 19:44:00.817256  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:00.818184  182334 main.go:141] libmachine: (calico-980148) DBG | no network interface addresses found for domain calico-980148 (source=lease)
	I1009 19:44:00.818213  182334 main.go:141] libmachine: (calico-980148) DBG | trying to list again with source=arp
	I1009 19:44:00.818618  182334 main.go:141] libmachine: (calico-980148) DBG | unable to find current IP address of domain calico-980148 in network mk-calico-980148 (interfaces detected: [])
	I1009 19:44:00.818686  182334 main.go:141] libmachine: (calico-980148) DBG | I1009 19:44:00.818613  182362 retry.go:31] will retry after 3.634930175s: waiting for domain to come up
	W1009 19:43:59.633526  180627 pod_ready.go:104] pod "etcd-pause-612343" is not "Ready", error: <nil>
	W1009 19:44:02.134731  180627 pod_ready.go:104] pod "etcd-pause-612343" is not "Ready", error: <nil>
	I1009 19:43:58.788613  182549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:43:58.788643  182549 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-136449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:43:58.788658  182549 cache.go:58] Caching tarball of preloaded images
	I1009 19:43:58.788734  182549 preload.go:233] Found /home/jenkins/minikube-integration/21683-136449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:43:58.788740  182549 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:43:58.788832  182549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/cert-expiration-635437/config.json ...
	I1009 19:43:58.789109  182549 start.go:361] acquireMachinesLock for cert-expiration-635437: {Name:mkb52a311831bedb463a7965f6666d89b7fa391a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 19:44:06.830845  182549 start.go:365] duration metric: took 8.041687881s to acquireMachinesLock for "cert-expiration-635437"
	I1009 19:44:06.830901  182549 start.go:97] Skipping create...Using existing machine configuration
	I1009 19:44:06.830907  182549 fix.go:55] fixHost starting: 
	I1009 19:44:06.831370  182549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:44:06.831425  182549 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:44:06.850331  182549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43259
	I1009 19:44:06.850870  182549 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:44:06.851362  182549 main.go:141] libmachine: Using API Version  1
	I1009 19:44:06.851382  182549 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:44:06.851815  182549 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:44:06.852068  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .DriverName
	I1009 19:44:06.852296  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetState
	I1009 19:44:06.854726  182549 fix.go:113] recreateIfNeeded on cert-expiration-635437: state=Running err=<nil>
	W1009 19:44:06.854757  182549 fix.go:139] unexpected machine state, will restart: <nil>
	I1009 19:44:04.456020  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:04.456939  182334 main.go:141] libmachine: (calico-980148) found domain IP: 192.168.50.239
	I1009 19:44:04.456966  182334 main.go:141] libmachine: (calico-980148) reserving static IP address...
	I1009 19:44:04.457010  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has current primary IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:04.457552  182334 main.go:141] libmachine: (calico-980148) DBG | unable to find host DHCP lease matching {name: "calico-980148", mac: "52:54:00:c8:17:32", ip: "192.168.50.239"} in network mk-calico-980148
	I1009 19:44:04.689486  182334 main.go:141] libmachine: (calico-980148) reserved static IP address 192.168.50.239 for domain calico-980148
	I1009 19:44:04.689513  182334 main.go:141] libmachine: (calico-980148) waiting for SSH...
	I1009 19:44:04.689519  182334 main.go:141] libmachine: (calico-980148) DBG | Getting to WaitForSSH function...
	I1009 19:44:04.692759  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:04.693144  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:04.693176  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:04.693350  182334 main.go:141] libmachine: (calico-980148) DBG | Using SSH client type: external
	I1009 19:44:04.693376  182334 main.go:141] libmachine: (calico-980148) DBG | Using SSH private key: /home/jenkins/minikube-integration/21683-136449/.minikube/machines/calico-980148/id_rsa (-rw-------)
	I1009 19:44:04.693424  182334 main.go:141] libmachine: (calico-980148) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.239 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21683-136449/.minikube/machines/calico-980148/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 19:44:04.693436  182334 main.go:141] libmachine: (calico-980148) DBG | About to run SSH command:
	I1009 19:44:04.693459  182334 main.go:141] libmachine: (calico-980148) DBG | exit 0
	I1009 19:44:04.829859  182334 main.go:141] libmachine: (calico-980148) DBG | SSH cmd err, output: <nil>: 
	I1009 19:44:04.830216  182334 main.go:141] libmachine: (calico-980148) domain creation complete
	I1009 19:44:04.830665  182334 main.go:141] libmachine: (calico-980148) Calling .GetConfigRaw
	I1009 19:44:04.831262  182334 main.go:141] libmachine: (calico-980148) Calling .DriverName
	I1009 19:44:04.831505  182334 main.go:141] libmachine: (calico-980148) Calling .DriverName
	I1009 19:44:04.831697  182334 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 19:44:04.831710  182334 main.go:141] libmachine: (calico-980148) Calling .GetState
	I1009 19:44:04.833206  182334 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 19:44:04.833227  182334 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 19:44:04.833234  182334 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 19:44:04.833239  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHHostname
	I1009 19:44:04.836173  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:04.836699  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:04.836722  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:04.836961  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHPort
	I1009 19:44:04.837139  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:04.837316  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:04.837439  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHUsername
	I1009 19:44:04.837601  182334 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:04.837886  182334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.239 22 <nil> <nil>}
	I1009 19:44:04.837900  182334 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 19:44:04.952723  182334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:44:04.952748  182334 main.go:141] libmachine: Detecting the provisioner...
	I1009 19:44:04.952756  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHHostname
	I1009 19:44:04.956172  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:04.956573  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:04.956606  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:04.956828  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHPort
	I1009 19:44:04.957069  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:04.957233  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:04.957400  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHUsername
	I1009 19:44:04.957569  182334 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:04.957862  182334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.239 22 <nil> <nil>}
	I1009 19:44:04.957879  182334 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 19:44:05.075477  182334 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1009 19:44:05.075635  182334 main.go:141] libmachine: found compatible host: buildroot
	I1009 19:44:05.075656  182334 main.go:141] libmachine: Provisioning with buildroot...
	I1009 19:44:05.075669  182334 main.go:141] libmachine: (calico-980148) Calling .GetMachineName
	I1009 19:44:05.076000  182334 buildroot.go:166] provisioning hostname "calico-980148"
	I1009 19:44:05.076033  182334 main.go:141] libmachine: (calico-980148) Calling .GetMachineName
	I1009 19:44:05.076286  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHHostname
	I1009 19:44:05.079635  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:05.080034  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:05.080066  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:05.080334  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHPort
	I1009 19:44:05.080573  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:05.080785  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:05.080984  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHUsername
	I1009 19:44:05.081174  182334 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:05.081403  182334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.239 22 <nil> <nil>}
	I1009 19:44:05.081422  182334 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-980148 && echo "calico-980148" | sudo tee /etc/hostname
	I1009 19:44:05.220334  182334 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-980148
	
	I1009 19:44:05.220370  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHHostname
	I1009 19:44:05.224136  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:05.224646  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:05.224677  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:05.224952  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHPort
	I1009 19:44:05.225196  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:05.225384  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:05.225540  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHUsername
	I1009 19:44:05.225734  182334 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:05.225966  182334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.239 22 <nil> <nil>}
	I1009 19:44:05.225990  182334 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-980148' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-980148/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-980148' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:44:05.350909  182334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:44:05.350937  182334 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21683-136449/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-136449/.minikube}
	I1009 19:44:05.350960  182334 buildroot.go:174] setting up certificates
	I1009 19:44:05.350974  182334 provision.go:84] configureAuth start
	I1009 19:44:05.350986  182334 main.go:141] libmachine: (calico-980148) Calling .GetMachineName
	I1009 19:44:05.351346  182334 main.go:141] libmachine: (calico-980148) Calling .GetIP
	I1009 19:44:05.354773  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:05.355220  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:05.355273  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:05.355483  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHHostname
	I1009 19:44:05.358080  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:05.358431  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:05.358457  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:05.358680  182334 provision.go:143] copyHostCerts
	I1009 19:44:05.358743  182334 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-136449/.minikube/ca.pem, removing ...
	I1009 19:44:05.358761  182334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-136449/.minikube/ca.pem
	I1009 19:44:05.358836  182334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-136449/.minikube/ca.pem (1082 bytes)
	I1009 19:44:05.358945  182334 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-136449/.minikube/cert.pem, removing ...
	I1009 19:44:05.358953  182334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-136449/.minikube/cert.pem
	I1009 19:44:05.358983  182334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-136449/.minikube/cert.pem (1123 bytes)
	I1009 19:44:05.359064  182334 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-136449/.minikube/key.pem, removing ...
	I1009 19:44:05.359074  182334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-136449/.minikube/key.pem
	I1009 19:44:05.359100  182334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-136449/.minikube/key.pem (1675 bytes)
	I1009 19:44:05.359162  182334 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-136449/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca-key.pem org=jenkins.calico-980148 san=[127.0.0.1 192.168.50.239 calico-980148 localhost minikube]
	I1009 19:44:05.783584  182334 provision.go:177] copyRemoteCerts
	I1009 19:44:05.783685  182334 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:44:05.783724  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHHostname
	I1009 19:44:05.786850  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:05.787238  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:05.787266  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:05.787519  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHPort
	I1009 19:44:05.787751  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:05.787938  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHUsername
	I1009 19:44:05.788091  182334 sshutil.go:53] new ssh client: &{IP:192.168.50.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/calico-980148/id_rsa Username:docker}
	I1009 19:44:05.876251  182334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:44:05.911432  182334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 19:44:05.945259  182334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:44:05.979629  182334 provision.go:87] duration metric: took 628.638584ms to configureAuth
	I1009 19:44:05.979659  182334 buildroot.go:189] setting minikube options for container-runtime
	I1009 19:44:05.979901  182334 config.go:182] Loaded profile config "calico-980148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:44:05.980004  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHHostname
	I1009 19:44:05.983346  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:05.983811  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:05.983841  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:05.984083  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHPort
	I1009 19:44:05.984348  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:05.984521  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:05.984685  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHUsername
	I1009 19:44:05.984842  182334 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:05.985071  182334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.239 22 <nil> <nil>}
	I1009 19:44:05.985097  182334 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:44:06.546985  182334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:44:06.547009  182334 main.go:141] libmachine: Checking connection to Docker...
	I1009 19:44:06.547017  182334 main.go:141] libmachine: (calico-980148) Calling .GetURL
	I1009 19:44:06.548424  182334 main.go:141] libmachine: (calico-980148) DBG | using libvirt version 8000000
	I1009 19:44:06.551614  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.552083  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:06.552118  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.552282  182334 main.go:141] libmachine: Docker is up and running!
	I1009 19:44:06.552295  182334 main.go:141] libmachine: Reticulating splines...
	I1009 19:44:06.552302  182334 client.go:171] duration metric: took 19.219582645s to LocalClient.Create
	I1009 19:44:06.552326  182334 start.go:168] duration metric: took 19.219644386s to libmachine.API.Create "calico-980148"
	I1009 19:44:06.552335  182334 start.go:294] postStartSetup for "calico-980148" (driver="kvm2")
	I1009 19:44:06.552348  182334 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:44:06.552368  182334 main.go:141] libmachine: (calico-980148) Calling .DriverName
	I1009 19:44:06.552704  182334 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:44:06.552740  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHHostname
	I1009 19:44:06.556512  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.557029  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:06.557060  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.557271  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHPort
	I1009 19:44:06.557465  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:06.557689  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHUsername
	I1009 19:44:06.557864  182334 sshutil.go:53] new ssh client: &{IP:192.168.50.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/calico-980148/id_rsa Username:docker}
	I1009 19:44:06.651553  182334 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:44:06.657619  182334 info.go:137] Remote host: Buildroot 2025.02
	I1009 19:44:06.657652  182334 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-136449/.minikube/addons for local assets ...
	I1009 19:44:06.657725  182334 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-136449/.minikube/files for local assets ...
	I1009 19:44:06.657884  182334 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-136449/.minikube/files/etc/ssl/certs/1403582.pem -> 1403582.pem in /etc/ssl/certs
	I1009 19:44:06.658042  182334 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:44:06.671899  182334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/files/etc/ssl/certs/1403582.pem --> /etc/ssl/certs/1403582.pem (1708 bytes)
	I1009 19:44:06.706143  182334 start.go:297] duration metric: took 153.791101ms for postStartSetup
	I1009 19:44:06.706193  182334 main.go:141] libmachine: (calico-980148) Calling .GetConfigRaw
	I1009 19:44:06.706847  182334 main.go:141] libmachine: (calico-980148) Calling .GetIP
	I1009 19:44:06.709704  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.710169  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:06.710195  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.710586  182334 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/calico-980148/config.json ...
	I1009 19:44:06.710883  182334 start.go:129] duration metric: took 19.396107128s to createHost
	I1009 19:44:06.710914  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHHostname
	I1009 19:44:06.713877  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.714290  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:06.714318  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.714520  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHPort
	I1009 19:44:06.714751  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:06.714940  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:06.715076  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHUsername
	I1009 19:44:06.715195  182334 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:06.715410  182334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.239 22 <nil> <nil>}
	I1009 19:44:06.715421  182334 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 19:44:06.830682  182334 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760039046.784843746
	
	I1009 19:44:06.830707  182334 fix.go:217] guest clock: 1760039046.784843746
	I1009 19:44:06.830717  182334 fix.go:230] Guest: 2025-10-09 19:44:06.784843746 +0000 UTC Remote: 2025-10-09 19:44:06.710900055 +0000 UTC m=+19.526284220 (delta=73.943691ms)
	I1009 19:44:06.830744  182334 fix.go:201] guest clock delta is within tolerance: 73.943691ms
	I1009 19:44:06.830751  182334 start.go:84] releasing machines lock for "calico-980148", held for 19.516048572s
	I1009 19:44:06.830777  182334 main.go:141] libmachine: (calico-980148) Calling .DriverName
	I1009 19:44:06.831066  182334 main.go:141] libmachine: (calico-980148) Calling .GetIP
	I1009 19:44:06.834520  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.835054  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:06.835090  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.835316  182334 main.go:141] libmachine: (calico-980148) Calling .DriverName
	I1009 19:44:06.835899  182334 main.go:141] libmachine: (calico-980148) Calling .DriverName
	I1009 19:44:06.836108  182334 main.go:141] libmachine: (calico-980148) Calling .DriverName
	I1009 19:44:06.836213  182334 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:44:06.836265  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHHostname
	I1009 19:44:06.836323  182334 ssh_runner.go:195] Run: cat /version.json
	I1009 19:44:06.836353  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHHostname
	I1009 19:44:06.839806  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.840235  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:06.840263  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.840281  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.840547  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHPort
	I1009 19:44:06.840823  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:06.840874  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:06.840945  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:06.840983  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHUsername
	I1009 19:44:06.841126  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHPort
	I1009 19:44:06.841201  182334 sshutil.go:53] new ssh client: &{IP:192.168.50.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/calico-980148/id_rsa Username:docker}
	I1009 19:44:06.841327  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHKeyPath
	I1009 19:44:06.841503  182334 main.go:141] libmachine: (calico-980148) Calling .GetSSHUsername
	I1009 19:44:06.841642  182334 sshutil.go:53] new ssh client: &{IP:192.168.50.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/calico-980148/id_rsa Username:docker}
	I1009 19:44:06.956311  182334 ssh_runner.go:195] Run: systemctl --version
	I1009 19:44:06.963824  182334 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:44:07.135356  182334 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:44:07.143852  182334 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:44:07.143928  182334 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:44:07.169602  182334 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 19:44:07.169631  182334 start.go:496] detecting cgroup driver to use...
	I1009 19:44:07.169700  182334 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:44:07.196426  182334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:44:07.217000  182334 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:44:07.217056  182334 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	W1009 19:44:04.632760  180627 pod_ready.go:104] pod "etcd-pause-612343" is not "Ready", error: <nil>
	W1009 19:44:06.634063  180627 pod_ready.go:104] pod "etcd-pause-612343" is not "Ready", error: <nil>
	I1009 19:44:06.857105  182549 out.go:252] * Updating the running kvm2 "cert-expiration-635437" VM ...
	I1009 19:44:06.857129  182549 machine.go:93] provisionDockerMachine start ...
	I1009 19:44:06.857145  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .DriverName
	I1009 19:44:06.857380  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHHostname
	I1009 19:44:06.860316  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:06.860870  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:f6:bd", ip: ""} in network mk-cert-expiration-635437: {Iface:virbr1 ExpiryTime:2025-10-09 20:40:32 +0000 UTC Type:0 Mac:52:54:00:e9:f6:bd Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:cert-expiration-635437 Clientid:01:52:54:00:e9:f6:bd}
	I1009 19:44:06.860900  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined IP address 192.168.39.40 and MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:06.861129  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHPort
	I1009 19:44:06.861321  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHKeyPath
	I1009 19:44:06.861464  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHKeyPath
	I1009 19:44:06.861622  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHUsername
	I1009 19:44:06.861871  182549 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:06.862194  182549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I1009 19:44:06.862201  182549 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:44:06.982740  182549 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-635437
	
	I1009 19:44:06.982759  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetMachineName
	I1009 19:44:06.983048  182549 buildroot.go:166] provisioning hostname "cert-expiration-635437"
	I1009 19:44:06.983073  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetMachineName
	I1009 19:44:06.983296  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHHostname
	I1009 19:44:06.986966  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:06.987439  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:f6:bd", ip: ""} in network mk-cert-expiration-635437: {Iface:virbr1 ExpiryTime:2025-10-09 20:40:32 +0000 UTC Type:0 Mac:52:54:00:e9:f6:bd Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:cert-expiration-635437 Clientid:01:52:54:00:e9:f6:bd}
	I1009 19:44:06.987475  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined IP address 192.168.39.40 and MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:06.987643  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHPort
	I1009 19:44:06.987836  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHKeyPath
	I1009 19:44:06.987979  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHKeyPath
	I1009 19:44:06.988154  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHUsername
	I1009 19:44:06.988375  182549 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:06.988650  182549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I1009 19:44:06.988661  182549 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-635437 && echo "cert-expiration-635437" | sudo tee /etc/hostname
	I1009 19:44:07.123701  182549 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-635437
	
	I1009 19:44:07.123717  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHHostname
	I1009 19:44:07.127456  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:07.127956  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:f6:bd", ip: ""} in network mk-cert-expiration-635437: {Iface:virbr1 ExpiryTime:2025-10-09 20:40:32 +0000 UTC Type:0 Mac:52:54:00:e9:f6:bd Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:cert-expiration-635437 Clientid:01:52:54:00:e9:f6:bd}
	I1009 19:44:07.127982  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined IP address 192.168.39.40 and MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:07.128336  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHPort
	I1009 19:44:07.128527  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHKeyPath
	I1009 19:44:07.128734  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHKeyPath
	I1009 19:44:07.128890  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHUsername
	I1009 19:44:07.129092  182549 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:07.129293  182549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I1009 19:44:07.129304  182549 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-635437' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-635437/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-635437' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:44:07.250758  182549 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:44:07.250781  182549 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21683-136449/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-136449/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-136449/.minikube}
	I1009 19:44:07.250840  182549 buildroot.go:174] setting up certificates
	I1009 19:44:07.250855  182549 provision.go:84] configureAuth start
	I1009 19:44:07.250867  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetMachineName
	I1009 19:44:07.251236  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetIP
	I1009 19:44:07.254721  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:07.255191  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:f6:bd", ip: ""} in network mk-cert-expiration-635437: {Iface:virbr1 ExpiryTime:2025-10-09 20:40:32 +0000 UTC Type:0 Mac:52:54:00:e9:f6:bd Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:cert-expiration-635437 Clientid:01:52:54:00:e9:f6:bd}
	I1009 19:44:07.255215  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined IP address 192.168.39.40 and MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:07.255497  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHHostname
	I1009 19:44:07.258842  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:07.259268  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:f6:bd", ip: ""} in network mk-cert-expiration-635437: {Iface:virbr1 ExpiryTime:2025-10-09 20:40:32 +0000 UTC Type:0 Mac:52:54:00:e9:f6:bd Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:cert-expiration-635437 Clientid:01:52:54:00:e9:f6:bd}
	I1009 19:44:07.259316  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined IP address 192.168.39.40 and MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:07.259714  182549 provision.go:143] copyHostCerts
	I1009 19:44:07.259766  182549 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-136449/.minikube/ca.pem, removing ...
	I1009 19:44:07.259788  182549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-136449/.minikube/ca.pem
	I1009 19:44:07.259850  182549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-136449/.minikube/ca.pem (1082 bytes)
	I1009 19:44:07.259969  182549 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-136449/.minikube/cert.pem, removing ...
	I1009 19:44:07.259974  182549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-136449/.minikube/cert.pem
	I1009 19:44:07.260004  182549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-136449/.minikube/cert.pem (1123 bytes)
	I1009 19:44:07.260085  182549 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-136449/.minikube/key.pem, removing ...
	I1009 19:44:07.260090  182549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-136449/.minikube/key.pem
	I1009 19:44:07.260122  182549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-136449/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-136449/.minikube/key.pem (1675 bytes)
	I1009 19:44:07.260249  182549 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-136449/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-635437 san=[127.0.0.1 192.168.39.40 cert-expiration-635437 localhost minikube]
	I1009 19:44:07.503263  182549 provision.go:177] copyRemoteCerts
	I1009 19:44:07.503310  182549 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:44:07.503333  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHHostname
	I1009 19:44:07.506879  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:07.507341  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:f6:bd", ip: ""} in network mk-cert-expiration-635437: {Iface:virbr1 ExpiryTime:2025-10-09 20:40:32 +0000 UTC Type:0 Mac:52:54:00:e9:f6:bd Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:cert-expiration-635437 Clientid:01:52:54:00:e9:f6:bd}
	I1009 19:44:07.507359  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined IP address 192.168.39.40 and MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:07.507603  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHPort
	I1009 19:44:07.507801  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHKeyPath
	I1009 19:44:07.507966  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHUsername
	I1009 19:44:07.508096  182549 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/cert-expiration-635437/id_rsa Username:docker}
	I1009 19:44:07.601586  182549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1009 19:44:07.638626  182549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:44:07.673647  182549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:44:07.710771  182549 provision.go:87] duration metric: took 459.89931ms to configureAuth
	I1009 19:44:07.710793  182549 buildroot.go:189] setting minikube options for container-runtime
	I1009 19:44:07.710989  182549 config.go:182] Loaded profile config "cert-expiration-635437": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:44:07.711052  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHHostname
	I1009 19:44:07.714281  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:07.714704  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:f6:bd", ip: ""} in network mk-cert-expiration-635437: {Iface:virbr1 ExpiryTime:2025-10-09 20:40:32 +0000 UTC Type:0 Mac:52:54:00:e9:f6:bd Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:cert-expiration-635437 Clientid:01:52:54:00:e9:f6:bd}
	I1009 19:44:07.714729  182549 main.go:141] libmachine: (cert-expiration-635437) DBG | domain cert-expiration-635437 has defined IP address 192.168.39.40 and MAC address 52:54:00:e9:f6:bd in network mk-cert-expiration-635437
	I1009 19:44:07.714970  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHPort
	I1009 19:44:07.715216  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHKeyPath
	I1009 19:44:07.715430  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHKeyPath
	I1009 19:44:07.715641  182549 main.go:141] libmachine: (cert-expiration-635437) Calling .GetSSHUsername
	I1009 19:44:07.715840  182549 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:07.716134  182549 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I1009 19:44:07.716150  182549 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:44:07.238431  182334 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:44:07.261248  182334 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:44:07.435878  182334 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:44:07.676995  182334 docker.go:234] disabling docker service ...
	I1009 19:44:07.677060  182334 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:44:07.695925  182334 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:44:07.715016  182334 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:44:07.915531  182334 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:44:08.066437  182334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:44:08.085584  182334 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:44:08.110466  182334 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:44:08.110528  182334 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:08.125653  182334 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:44:08.125714  182334 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:08.144778  182334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:08.159719  182334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:08.173272  182334 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:44:08.187887  182334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:08.201293  182334 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:08.225147  182334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:44:08.239022  182334 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:44:08.250457  182334 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 19:44:08.250512  182334 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 19:44:08.274258  182334 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:44:08.289868  182334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:44:08.446168  182334 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:44:08.570418  182334 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:44:08.570504  182334 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:44:08.577143  182334 start.go:564] Will wait 60s for crictl version
	I1009 19:44:08.577202  182334 ssh_runner.go:195] Run: which crictl
	I1009 19:44:08.581828  182334 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 19:44:08.629024  182334 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 19:44:08.629128  182334 ssh_runner.go:195] Run: crio --version
	I1009 19:44:08.663662  182334 ssh_runner.go:195] Run: crio --version
	I1009 19:44:08.707203  182334 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1009 19:44:08.134145  180627 pod_ready.go:94] pod "etcd-pause-612343" is "Ready"
	I1009 19:44:08.134182  180627 pod_ready.go:86] duration metric: took 13.008456235s for pod "etcd-pause-612343" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:44:08.137172  180627 pod_ready.go:83] waiting for pod "kube-apiserver-pause-612343" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:44:08.143016  180627 pod_ready.go:94] pod "kube-apiserver-pause-612343" is "Ready"
	I1009 19:44:08.143050  180627 pod_ready.go:86] duration metric: took 5.842776ms for pod "kube-apiserver-pause-612343" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:44:08.145684  180627 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-612343" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:44:08.150977  180627 pod_ready.go:94] pod "kube-controller-manager-pause-612343" is "Ready"
	I1009 19:44:08.151008  180627 pod_ready.go:86] duration metric: took 5.289678ms for pod "kube-controller-manager-pause-612343" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:44:08.155018  180627 pod_ready.go:83] waiting for pod "kube-proxy-szpll" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:44:08.330090  180627 pod_ready.go:94] pod "kube-proxy-szpll" is "Ready"
	I1009 19:44:08.330121  180627 pod_ready.go:86] duration metric: took 175.083559ms for pod "kube-proxy-szpll" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:44:08.530511  180627 pod_ready.go:83] waiting for pod "kube-scheduler-pause-612343" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:44:09.331477  180627 pod_ready.go:94] pod "kube-scheduler-pause-612343" is "Ready"
	I1009 19:44:09.331514  180627 pod_ready.go:86] duration metric: took 800.969706ms for pod "kube-scheduler-pause-612343" in "kube-system" namespace to be "Ready" or be gone ...
	I1009 19:44:09.331532  180627 pod_ready.go:40] duration metric: took 14.21865808s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1009 19:44:09.388623  180627 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1009 19:44:09.390460  180627 out.go:179] * Done! kubectl is now configured to use "pause-612343" cluster and "default" namespace by default
	I1009 19:44:08.708356  182334 main.go:141] libmachine: (calico-980148) Calling .GetIP
	I1009 19:44:08.711218  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:08.711627  182334 main.go:141] libmachine: (calico-980148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:17:32", ip: ""} in network mk-calico-980148: {Iface:virbr2 ExpiryTime:2025-10-09 20:44:04 +0000 UTC Type:0 Mac:52:54:00:c8:17:32 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:calico-980148 Clientid:01:52:54:00:c8:17:32}
	I1009 19:44:08.711653  182334 main.go:141] libmachine: (calico-980148) DBG | domain calico-980148 has defined IP address 192.168.50.239 and MAC address 52:54:00:c8:17:32 in network mk-calico-980148
	I1009 19:44:08.711960  182334 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1009 19:44:08.718364  182334 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:44:08.736721  182334 kubeadm.go:883] updating cluster {Name:calico-980148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-980148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.50.239 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:44:08.736833  182334 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:44:08.736879  182334 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:44:08.778447  182334 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1009 19:44:08.778538  182334 ssh_runner.go:195] Run: which lz4
	I1009 19:44:08.783523  182334 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 19:44:08.788687  182334 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 19:44:08.788734  182334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-136449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1009 19:44:10.610859  182334 crio.go:462] duration metric: took 1.827378485s to copy over tarball
	I1009 19:44:10.610956  182334 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	
	
	==> CRI-O <==
	Oct 09 19:44:12 pause-612343 crio[2791]: time="2025-10-09 19:44:12.623511715Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760039052623484697,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dca1de1d-20d5-445b-bc29-381562b4a00b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:44:12 pause-612343 crio[2791]: time="2025-10-09 19:44:12.624514159Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b299dc9-03f6-4ebf-a4d8-385e3137aef1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:44:12 pause-612343 crio[2791]: time="2025-10-09 19:44:12.624586623Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b299dc9-03f6-4ebf-a4d8-385e3137aef1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:44:12 pause-612343 crio[2791]: time="2025-10-09 19:44:12.625357399Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:76d6212c65d695c65f3a9e21b71c17cbad0a2e50175abd306ef5a86f1093a726,PodSandboxId:7ca202dc68593e6b1ba4bf9c7c9c6965c1ffd1c017ca5c0776aacd227bcd2527,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760039029436903283,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a35f51b61eb97291585355e924230ab0,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e0225596818974d02a9675feba9c0672a1d62cd0286a165aaf4932d2998159,PodSandboxId:0594fd27d0b4f42604ab374af794309eab220d8f22d27121d4fda15aeb33b9fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760039029353113448,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 873e1704fe64d3c75f8296e55bac83ec,},Annotations:map[string]string{io.kubernetes.container.hash: e9e2
0c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62b69d7131b27ae7a7f30b034bfc3ae096bbf3ed81ed3c86310df0b56ddf7491,PodSandboxId:17a93ddc9fddba1321320e3815f53720f0d7734e236d89b9d206be06df47f91f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760039029355536272,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: c6a100e14b573d0be8428e46eeee0da0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d3e97ac0c8ea1a4165721f52582915f343d7bbc94e6703171b3bda25a0d26ff,PodSandboxId:78a69d5e0bdc6bc8459eb14da95e673fff39805afbbf56bdda3d39ebf23732b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760039029382891523,Labels:map[string]string{io.kubernetes.container.name: kube-controll
er-manager,io.kubernetes.pod.name: kube-controller-manager-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba43c82d048ad7bb697afcd4d81c4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ffde06198cbe6aad03a9a45a8b1affd409ae58bd86af1a2edf2c36944dda73f,PodSandboxId:9714a977749608f4fb43f9b41065e4649ecfd20a17989751ffb0e08dddcb0355,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Sta
te:CONTAINER_RUNNING,CreatedAt:1760039005438370293,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szpll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd002ef8-9d92-4b2e-a2cf-3acb2cf28e5b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e165de2fec8359c153f78ff140fd904d3fe755b6e5778ead48ad95a2320439,PodSandboxId:13c6adf6e5f78d4fb4b2c6a426395a2b0ab557633c0f8dc8ce0113607d34f5d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17600
39006190559124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pw6gm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56aa3de3-82c9-4b63-9d74-a71586ddf7af,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:111526125ab0d3e20b8b1d0d02c044f9aefd3b45a90d5b94f641e221ade4c254,PodSandboxId:0594fd27d0b4f42604ab374af794309eab220d8f22d271
21d4fda15aeb33b9fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760039005219351767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 873e1704fe64d3c75f8296e55bac83ec,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60b0cc479ec3852a706fb93935290f72957b4299faa5a1ef242d
48875c76b7b7,PodSandboxId:17a93ddc9fddba1321320e3815f53720f0d7734e236d89b9d206be06df47f91f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760039005059850062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a100e14b573d0be8428e46eeee0da0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:bc15b6320615dd87582440699484b1cd8bfdce38a3e99d4a1ab0286bf7308d29,PodSandboxId:7ca202dc68593e6b1ba4bf9c7c9c6965c1ffd1c017ca5c0776aacd227bcd2527,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760039004901628725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a35f51b61eb97291585355e924230ab0,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd52e789b1557c75034fd366118f56b15157e680cd59cbc92559fd05a9511bc9,PodSandboxId:78a69d5e0bdc6bc8459eb14da95e673fff39805afbbf56bdda3d39ebf23732b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760039004795956714,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba43c82d048ad7bb697afcd4d81c4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPo
rt\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f30930d41ed78ad8cacac4a933b82bacbafa334955df6a526020cdb0bdbd20cf,PodSandboxId:b2b11da3b8df5c6c5014b3e410b8458282dc00b53ec778c0d9307cef47fb6320,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760038939572636972,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szpll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd002ef8-9d92-4b2e-a2cf-3acb2cf28e5b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ced85d95dcdefa422d5d30cf109664a2ed57762475b4d0bbc39cc8459b238880,PodSandboxId:02980fb5e67c22c79cbaafb76e23ec2bc002f51fee898378fefbbfd2f8a093c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760038939079369566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pw6gm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56aa3de3-82c9-4b63-9d74-a71586ddf7af,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b299dc9-03f6-4ebf-a4d8-385e3137aef1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:44:12 pause-612343 crio[2791]: time="2025-10-09 19:44:12.677193559Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=59cf42eb-5c45-4722-a119-16ec318b8b4a name=/runtime.v1.RuntimeService/Version
	Oct 09 19:44:12 pause-612343 crio[2791]: time="2025-10-09 19:44:12.677282448Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59cf42eb-5c45-4722-a119-16ec318b8b4a name=/runtime.v1.RuntimeService/Version
	Oct 09 19:44:12 pause-612343 crio[2791]: time="2025-10-09 19:44:12.679025945Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5389602-6011-421d-8f35-195116eed4da name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:44:12 pause-612343 crio[2791]: time="2025-10-09 19:44:12.679416652Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760039052679392953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5389602-6011-421d-8f35-195116eed4da name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:44:12 pause-612343 crio[2791]: time="2025-10-09 19:44:12.680703422Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5db18ed-d528-406c-8b39-e2cccd9dc619 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:44:12 pause-612343 crio[2791]: time="2025-10-09 19:44:12.680861117Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f5db18ed-d528-406c-8b39-e2cccd9dc619 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:44:12 pause-612343 crio[2791]: time="2025-10-09 19:44:12.681151648Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:76d6212c65d695c65f3a9e21b71c17cbad0a2e50175abd306ef5a86f1093a726,PodSandboxId:7ca202dc68593e6b1ba4bf9c7c9c6965c1ffd1c017ca5c0776aacd227bcd2527,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760039029436903283,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a35f51b61eb97291585355e924230ab0,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e0225596818974d02a9675feba9c0672a1d62cd0286a165aaf4932d2998159,PodSandboxId:0594fd27d0b4f42604ab374af794309eab220d8f22d27121d4fda15aeb33b9fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760039029353113448,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 873e1704fe64d3c75f8296e55bac83ec,},Annotations:map[string]string{io.kubernetes.container.hash: e9e2
0c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62b69d7131b27ae7a7f30b034bfc3ae096bbf3ed81ed3c86310df0b56ddf7491,PodSandboxId:17a93ddc9fddba1321320e3815f53720f0d7734e236d89b9d206be06df47f91f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760039029355536272,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: c6a100e14b573d0be8428e46eeee0da0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d3e97ac0c8ea1a4165721f52582915f343d7bbc94e6703171b3bda25a0d26ff,PodSandboxId:78a69d5e0bdc6bc8459eb14da95e673fff39805afbbf56bdda3d39ebf23732b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760039029382891523,Labels:map[string]string{io.kubernetes.container.name: kube-controll
er-manager,io.kubernetes.pod.name: kube-controller-manager-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba43c82d048ad7bb697afcd4d81c4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ffde06198cbe6aad03a9a45a8b1affd409ae58bd86af1a2edf2c36944dda73f,PodSandboxId:9714a977749608f4fb43f9b41065e4649ecfd20a17989751ffb0e08dddcb0355,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Sta
te:CONTAINER_RUNNING,CreatedAt:1760039005438370293,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szpll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd002ef8-9d92-4b2e-a2cf-3acb2cf28e5b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e165de2fec8359c153f78ff140fd904d3fe755b6e5778ead48ad95a2320439,PodSandboxId:13c6adf6e5f78d4fb4b2c6a426395a2b0ab557633c0f8dc8ce0113607d34f5d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17600
39006190559124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pw6gm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56aa3de3-82c9-4b63-9d74-a71586ddf7af,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:111526125ab0d3e20b8b1d0d02c044f9aefd3b45a90d5b94f641e221ade4c254,PodSandboxId:0594fd27d0b4f42604ab374af794309eab220d8f22d271
21d4fda15aeb33b9fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760039005219351767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 873e1704fe64d3c75f8296e55bac83ec,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60b0cc479ec3852a706fb93935290f72957b4299faa5a1ef242d
48875c76b7b7,PodSandboxId:17a93ddc9fddba1321320e3815f53720f0d7734e236d89b9d206be06df47f91f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760039005059850062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a100e14b573d0be8428e46eeee0da0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:bc15b6320615dd87582440699484b1cd8bfdce38a3e99d4a1ab0286bf7308d29,PodSandboxId:7ca202dc68593e6b1ba4bf9c7c9c6965c1ffd1c017ca5c0776aacd227bcd2527,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760039004901628725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a35f51b61eb97291585355e924230ab0,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd52e789b1557c75034fd366118f56b15157e680cd59cbc92559fd05a9511bc9,PodSandboxId:78a69d5e0bdc6bc8459eb14da95e673fff39805afbbf56bdda3d39ebf23732b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760039004795956714,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba43c82d048ad7bb697afcd4d81c4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPo
rt\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f30930d41ed78ad8cacac4a933b82bacbafa334955df6a526020cdb0bdbd20cf,PodSandboxId:b2b11da3b8df5c6c5014b3e410b8458282dc00b53ec778c0d9307cef47fb6320,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760038939572636972,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szpll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd002ef8-9d92-4b2e-a2cf-3acb2cf28e5b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ced85d95dcdefa422d5d30cf109664a2ed57762475b4d0bbc39cc8459b238880,PodSandboxId:02980fb5e67c22c79cbaafb76e23ec2bc002f51fee898378fefbbfd2f8a093c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760038939079369566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pw6gm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56aa3de3-82c9-4b63-9d74-a71586ddf7af,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f5db18ed-d528-406c-8b39-e2cccd9dc619 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:44:12 pause-612343 crio[2791]: time="2025-10-09 19:44:12.740037662Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=193ed71d-5b7e-485a-9ff0-8d1a750e9d04 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:44:12 pause-612343 crio[2791]: time="2025-10-09 19:44:12.740134695Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=193ed71d-5b7e-485a-9ff0-8d1a750e9d04 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:44:12 pause-612343 crio[2791]: time="2025-10-09 19:44:12.741599989Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=22102a24-94fd-4a74-8130-c98e789459bf name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:44:12 pause-612343 crio[2791]: time="2025-10-09 19:44:12.743659356Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760039052743618164,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=22102a24-94fd-4a74-8130-c98e789459bf name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:44:12 pause-612343 crio[2791]: time="2025-10-09 19:44:12.745059921Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=196930b1-55b5-445b-b684-6f940bb8c7b0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:44:12 pause-612343 crio[2791]: time="2025-10-09 19:44:12.745140087Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=196930b1-55b5-445b-b684-6f940bb8c7b0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:44:12 pause-612343 crio[2791]: time="2025-10-09 19:44:12.745437302Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:76d6212c65d695c65f3a9e21b71c17cbad0a2e50175abd306ef5a86f1093a726,PodSandboxId:7ca202dc68593e6b1ba4bf9c7c9c6965c1ffd1c017ca5c0776aacd227bcd2527,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760039029436903283,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a35f51b61eb97291585355e924230ab0,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e0225596818974d02a9675feba9c0672a1d62cd0286a165aaf4932d2998159,PodSandboxId:0594fd27d0b4f42604ab374af794309eab220d8f22d27121d4fda15aeb33b9fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760039029353113448,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 873e1704fe64d3c75f8296e55bac83ec,},Annotations:map[string]string{io.kubernetes.container.hash: e9e2
0c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62b69d7131b27ae7a7f30b034bfc3ae096bbf3ed81ed3c86310df0b56ddf7491,PodSandboxId:17a93ddc9fddba1321320e3815f53720f0d7734e236d89b9d206be06df47f91f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760039029355536272,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: c6a100e14b573d0be8428e46eeee0da0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d3e97ac0c8ea1a4165721f52582915f343d7bbc94e6703171b3bda25a0d26ff,PodSandboxId:78a69d5e0bdc6bc8459eb14da95e673fff39805afbbf56bdda3d39ebf23732b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760039029382891523,Labels:map[string]string{io.kubernetes.container.name: kube-controll
er-manager,io.kubernetes.pod.name: kube-controller-manager-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba43c82d048ad7bb697afcd4d81c4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ffde06198cbe6aad03a9a45a8b1affd409ae58bd86af1a2edf2c36944dda73f,PodSandboxId:9714a977749608f4fb43f9b41065e4649ecfd20a17989751ffb0e08dddcb0355,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Sta
te:CONTAINER_RUNNING,CreatedAt:1760039005438370293,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szpll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd002ef8-9d92-4b2e-a2cf-3acb2cf28e5b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e165de2fec8359c153f78ff140fd904d3fe755b6e5778ead48ad95a2320439,PodSandboxId:13c6adf6e5f78d4fb4b2c6a426395a2b0ab557633c0f8dc8ce0113607d34f5d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17600
39006190559124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pw6gm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56aa3de3-82c9-4b63-9d74-a71586ddf7af,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:111526125ab0d3e20b8b1d0d02c044f9aefd3b45a90d5b94f641e221ade4c254,PodSandboxId:0594fd27d0b4f42604ab374af794309eab220d8f22d271
21d4fda15aeb33b9fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760039005219351767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 873e1704fe64d3c75f8296e55bac83ec,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60b0cc479ec3852a706fb93935290f72957b4299faa5a1ef242d
48875c76b7b7,PodSandboxId:17a93ddc9fddba1321320e3815f53720f0d7734e236d89b9d206be06df47f91f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760039005059850062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a100e14b573d0be8428e46eeee0da0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:bc15b6320615dd87582440699484b1cd8bfdce38a3e99d4a1ab0286bf7308d29,PodSandboxId:7ca202dc68593e6b1ba4bf9c7c9c6965c1ffd1c017ca5c0776aacd227bcd2527,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760039004901628725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a35f51b61eb97291585355e924230ab0,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd52e789b1557c75034fd366118f56b15157e680cd59cbc92559fd05a9511bc9,PodSandboxId:78a69d5e0bdc6bc8459eb14da95e673fff39805afbbf56bdda3d39ebf23732b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760039004795956714,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba43c82d048ad7bb697afcd4d81c4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPo
rt\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f30930d41ed78ad8cacac4a933b82bacbafa334955df6a526020cdb0bdbd20cf,PodSandboxId:b2b11da3b8df5c6c5014b3e410b8458282dc00b53ec778c0d9307cef47fb6320,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760038939572636972,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szpll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd002ef8-9d92-4b2e-a2cf-3acb2cf28e5b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ced85d95dcdefa422d5d30cf109664a2ed57762475b4d0bbc39cc8459b238880,PodSandboxId:02980fb5e67c22c79cbaafb76e23ec2bc002f51fee898378fefbbfd2f8a093c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760038939079369566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pw6gm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56aa3de3-82c9-4b63-9d74-a71586ddf7af,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=196930b1-55b5-445b-b684-6f940bb8c7b0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:44:12 pause-612343 crio[2791]: time="2025-10-09 19:44:12.795302259Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=92aa0f00-7248-4558-9326-2810797cbab8 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:44:12 pause-612343 crio[2791]: time="2025-10-09 19:44:12.795398432Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=92aa0f00-7248-4558-9326-2810797cbab8 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:44:12 pause-612343 crio[2791]: time="2025-10-09 19:44:12.797199172Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c789f0aa-b2fa-47d1-8882-09ba1ed226fb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:44:12 pause-612343 crio[2791]: time="2025-10-09 19:44:12.797595921Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760039052797576181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c789f0aa-b2fa-47d1-8882-09ba1ed226fb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:44:12 pause-612343 crio[2791]: time="2025-10-09 19:44:12.798321176Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=61ecdd3b-e86d-4c8a-8257-a9a93920906a name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:44:12 pause-612343 crio[2791]: time="2025-10-09 19:44:12.798368925Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=61ecdd3b-e86d-4c8a-8257-a9a93920906a name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:44:12 pause-612343 crio[2791]: time="2025-10-09 19:44:12.798904384Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:76d6212c65d695c65f3a9e21b71c17cbad0a2e50175abd306ef5a86f1093a726,PodSandboxId:7ca202dc68593e6b1ba4bf9c7c9c6965c1ffd1c017ca5c0776aacd227bcd2527,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760039029436903283,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a35f51b61eb97291585355e924230ab0,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e0225596818974d02a9675feba9c0672a1d62cd0286a165aaf4932d2998159,PodSandboxId:0594fd27d0b4f42604ab374af794309eab220d8f22d27121d4fda15aeb33b9fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760039029353113448,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 873e1704fe64d3c75f8296e55bac83ec,},Annotations:map[string]string{io.kubernetes.container.hash: e9e2
0c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62b69d7131b27ae7a7f30b034bfc3ae096bbf3ed81ed3c86310df0b56ddf7491,PodSandboxId:17a93ddc9fddba1321320e3815f53720f0d7734e236d89b9d206be06df47f91f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760039029355536272,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: c6a100e14b573d0be8428e46eeee0da0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d3e97ac0c8ea1a4165721f52582915f343d7bbc94e6703171b3bda25a0d26ff,PodSandboxId:78a69d5e0bdc6bc8459eb14da95e673fff39805afbbf56bdda3d39ebf23732b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760039029382891523,Labels:map[string]string{io.kubernetes.container.name: kube-controll
er-manager,io.kubernetes.pod.name: kube-controller-manager-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba43c82d048ad7bb697afcd4d81c4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ffde06198cbe6aad03a9a45a8b1affd409ae58bd86af1a2edf2c36944dda73f,PodSandboxId:9714a977749608f4fb43f9b41065e4649ecfd20a17989751ffb0e08dddcb0355,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Sta
te:CONTAINER_RUNNING,CreatedAt:1760039005438370293,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szpll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd002ef8-9d92-4b2e-a2cf-3acb2cf28e5b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e165de2fec8359c153f78ff140fd904d3fe755b6e5778ead48ad95a2320439,PodSandboxId:13c6adf6e5f78d4fb4b2c6a426395a2b0ab557633c0f8dc8ce0113607d34f5d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:17600
39006190559124,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pw6gm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56aa3de3-82c9-4b63-9d74-a71586ddf7af,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:111526125ab0d3e20b8b1d0d02c044f9aefd3b45a90d5b94f641e221ade4c254,PodSandboxId:0594fd27d0b4f42604ab374af794309eab220d8f22d271
21d4fda15aeb33b9fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760039005219351767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 873e1704fe64d3c75f8296e55bac83ec,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60b0cc479ec3852a706fb93935290f72957b4299faa5a1ef242d
48875c76b7b7,PodSandboxId:17a93ddc9fddba1321320e3815f53720f0d7734e236d89b9d206be06df47f91f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760039005059850062,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a100e14b573d0be8428e46eeee0da0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:bc15b6320615dd87582440699484b1cd8bfdce38a3e99d4a1ab0286bf7308d29,PodSandboxId:7ca202dc68593e6b1ba4bf9c7c9c6965c1ffd1c017ca5c0776aacd227bcd2527,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760039004901628725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a35f51b61eb97291585355e924230ab0,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd52e789b1557c75034fd366118f56b15157e680cd59cbc92559fd05a9511bc9,PodSandboxId:78a69d5e0bdc6bc8459eb14da95e673fff39805afbbf56bdda3d39ebf23732b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760039004795956714,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-612343,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba43c82d048ad7bb697afcd4d81c4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPo
rt\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f30930d41ed78ad8cacac4a933b82bacbafa334955df6a526020cdb0bdbd20cf,PodSandboxId:b2b11da3b8df5c6c5014b3e410b8458282dc00b53ec778c0d9307cef47fb6320,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760038939572636972,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-szpll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd002ef8-9d92-4b2e-a2cf-3acb2cf28e5b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ced85d95dcdefa422d5d30cf109664a2ed57762475b4d0bbc39cc8459b238880,PodSandboxId:02980fb5e67c22c79cbaafb76e23ec2bc002f51fee898378fefbbfd2f8a093c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760038939079369566,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pw6gm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56aa3de3-82c9-4b63-9d74-a71586ddf7af,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=61ecdd3b-e86d-4c8a-8257-a9a93920906a name=/runtime.v1.RuntimeService/ListContainers
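Note: the repeated Version/ImageFsInfo/ListContainers request-response pairs above are routine CRI polling captured while the post-mortem logs were being collected; each ListContainersResponse carries the same container inventory. As a rough sketch (assuming the pause-612343 node is still up and that crictl is present on it, which is the default for the crio runtime), the same inventory can be read directly on the node:

  minikube ssh -p pause-612343 -- sudo crictl ps -a      # all containers, including the Exited attempt-1 set
  minikube ssh -p pause-612343 -- sudo crictl pods       # the pod sandboxes referenced by PodSandboxId above

The "==> container status <==" table below is a condensed view of the same data.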
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	76d6212c65d69       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   23 seconds ago       Running             kube-apiserver            2                   7ca202dc68593       kube-apiserver-pause-612343
	0d3e97ac0c8ea       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   23 seconds ago       Running             kube-controller-manager   2                   78a69d5e0bdc6       kube-controller-manager-pause-612343
	62b69d7131b27       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   23 seconds ago       Running             kube-scheduler            2                   17a93ddc9fddb       kube-scheduler-pause-612343
	d0e0225596818       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   23 seconds ago       Running             etcd                      2                   0594fd27d0b4f       etcd-pause-612343
	d8e165de2fec8       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   46 seconds ago       Running             coredns                   1                   13c6adf6e5f78       coredns-66bc5c9577-pw6gm
	4ffde06198cbe       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   47 seconds ago       Running             kube-proxy                1                   9714a97774960       kube-proxy-szpll
	111526125ab0d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   47 seconds ago       Exited              etcd                      1                   0594fd27d0b4f       etcd-pause-612343
	60b0cc479ec38       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   47 seconds ago       Exited              kube-scheduler            1                   17a93ddc9fddb       kube-scheduler-pause-612343
	bc15b6320615d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   47 seconds ago       Exited              kube-apiserver            1                   7ca202dc68593       kube-apiserver-pause-612343
	fd52e789b1557       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   48 seconds ago       Exited              kube-controller-manager   1                   78a69d5e0bdc6       kube-controller-manager-pause-612343
	f30930d41ed78       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   About a minute ago   Exited              kube-proxy                0                   b2b11da3b8df5       kube-proxy-szpll
	ced85d95dcdef       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   02980fb5e67c2       coredns-66bc5c9577-pw6gm
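The attempt counts in this table match the restart under test: every control-plane component has an Exited attempt 1 from the first start and a Running attempt 2 created about 23 seconds before this capture, while kube-proxy and coredns are on attempt 1 with their attempt-0 containers exited. To pull the logs of one of the exited attempts, a sketch along these lines should work (container ID prefixes are taken from the table above; crictl accepts truncated IDs):

  minikube ssh -p pause-612343 -- sudo crictl logs bc15b6320615d   # kube-apiserver, attempt 1
  minikube ssh -p pause-612343 -- sudo crictl logs 111526125ab0d   # etcd, attempt 1 (no output captured in this run, see below)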
	
	
	==> coredns [ced85d95dcdefa422d5d30cf109664a2ed57762475b4d0bbc39cc8459b238880] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48784 - 7518 "HINFO IN 3947369618741164974.8529760864723045159. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028303103s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
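The dial timeouts above are CoreDNS (attempt 0) failing to reach the in-cluster apiserver Service at 10.96.0.1:443 while the control plane was being restarted, and the closing SIGTERM is that attempt being torn down. To confirm what that ClusterIP should resolve to once the apiserver is back, the standard checks are (assuming the kubeconfig context that the pause-612343 profile creates):

  kubectl --context pause-612343 get svc kubernetes -o wide
  kubectl --context pause-612343 get endpoints kubernetes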
	
	
	==> coredns [d8e165de2fec8359c153f78ff140fd904d3fe755b6e5778ead48ad95a2320439] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53937 - 23263 "HINFO IN 2019830926337538257.5552492686097077422. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.159943081s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:45704->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:45712->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:45710->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
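These later errors (TLS handshake timeout, then connection refused) fall in the window where kube-apiserver attempt 1 exited and attempt 2 was still coming up; CoreDNS keeps retrying and recovers once the apiserver answers again. A minimal post-restart check, assuming the usual k8s-app=kube-dns label that minikube applies to CoreDNS:

  kubectl --context pause-612343 -n kube-system get pods -l k8s-app=kube-dns
  kubectl --context pause-612343 -n kube-system logs -l k8s-app=kube-dns --tail=20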
	
	
	==> describe nodes <==
	Name:               pause-612343
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-612343
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=67931d2cc0a2e29153be17ee4f2d502b8a45c9cb
	                    minikube.k8s.io/name=pause-612343
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_09T19_42_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Oct 2025 19:42:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-612343
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Oct 2025 19:44:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Oct 2025 19:43:52 +0000   Thu, 09 Oct 2025 19:42:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Oct 2025 19:43:52 +0000   Thu, 09 Oct 2025 19:42:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Oct 2025 19:43:52 +0000   Thu, 09 Oct 2025 19:42:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Oct 2025 19:43:52 +0000   Thu, 09 Oct 2025 19:42:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.79
	  Hostname:    pause-612343
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c9d770d22646a58886847763ae7dec
	  System UUID:                a6c9d770-d226-46a5-8886-847763ae7dec
	  Boot ID:                    9d5d7f19-16bf-4a05-9263-cdd2617aeed2
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-pw6gm                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     115s
	  kube-system                 etcd-pause-612343                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m
	  kube-system                 kube-apiserver-pause-612343             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-pause-612343    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-szpll                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-scheduler-pause-612343             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 113s                 kube-proxy       
	  Normal  Starting                 14s                  kube-proxy       
	  Normal  NodeHasSufficientPID     2m8s (x7 over 2m8s)  kubelet          Node pause-612343 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node pause-612343 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node pause-612343 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m                   kubelet          Node pause-612343 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                   kubelet          Node pause-612343 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m                   kubelet          Node pause-612343 status is now: NodeHasSufficientPID
	  Normal  NodeReady                119s                 kubelet          Node pause-612343 status is now: NodeReady
	  Normal  RegisteredNode           116s                 node-controller  Node pause-612343 event: Registered Node pause-612343 in Controller
	  Normal  Starting                 25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 25s)    kubelet          Node pause-612343 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 25s)    kubelet          Node pause-612343 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 25s)    kubelet          Node pause-612343 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                  node-controller  Node pause-612343 event: Registered Node pause-612343 in Controller
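The duplicated event sequences (two "Starting kubelet" runs, two "RegisteredNode" entries, kube-proxy "Starting" at 113s and again at 14s) are the original boot and the second start of pause-612343. The same timeline can be re-derived after the fact with, for example:

  kubectl --context pause-612343 describe node pause-612343
  kubectl --context pause-612343 get events -A --sort-by=.lastTimestamp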
	
	
	==> dmesg <==
	[Oct 9 19:41] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000058] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002079] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.196846] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000028] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.087663] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.108864] kauditd_printk_skb: 74 callbacks suppressed
	[Oct 9 19:42] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.152161] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.205183] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.565405] kauditd_printk_skb: 207 callbacks suppressed
	[ +22.036629] kauditd_printk_skb: 38 callbacks suppressed
	[Oct 9 19:43] kauditd_printk_skb: 100 callbacks suppressed
	[ +12.124929] kauditd_printk_skb: 210 callbacks suppressed
	[  +3.472091] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [111526125ab0d3e20b8b1d0d02c044f9aefd3b45a90d5b94f641e221ade4c254] <==
	
	
	==> etcd [d0e0225596818974d02a9675feba9c0672a1d62cd0286a165aaf4932d2998159] <==
	{"level":"warn","ts":"2025-10-09T19:43:51.587594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.624586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.631444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.651943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.685616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.720963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.741601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.745936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.770233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.790641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.817978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.854028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.866600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.913056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.934390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.956156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.962913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.981853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:51.994376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:52.009545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:52.021944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:52.037472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:52.052156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:52.060519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-09T19:43:52.114243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55376","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:44:13 up 2 min,  0 users,  load average: 1.30, 0.64, 0.25
	Linux pause-612343 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [76d6212c65d695c65f3a9e21b71c17cbad0a2e50175abd306ef5a86f1093a726] <==
	I1009 19:43:52.812022       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1009 19:43:52.812628       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1009 19:43:52.812718       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1009 19:43:52.812822       1 aggregator.go:171] initial CRD sync complete...
	I1009 19:43:52.812830       1 autoregister_controller.go:144] Starting autoregister controller
	I1009 19:43:52.812835       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 19:43:52.812839       1 cache.go:39] Caches are synced for autoregister controller
	I1009 19:43:52.817157       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 19:43:52.817179       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1009 19:43:52.826362       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1009 19:43:52.831319       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1009 19:43:52.837897       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1009 19:43:52.861284       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1009 19:43:52.861393       1 policy_source.go:240] refreshing policies
	I1009 19:43:52.874177       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1009 19:43:52.900523       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 19:43:52.902481       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1009 19:43:53.722552       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 19:43:54.593469       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1009 19:43:54.634121       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1009 19:43:54.666360       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 19:43:54.673947       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 19:43:56.340472       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 19:43:56.493291       1 controller.go:667] quota admission added evaluator for: endpoints
	I1009 19:44:00.252839       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [bc15b6320615dd87582440699484b1cd8bfdce38a3e99d4a1ab0286bf7308d29] <==
	W1009 19:43:26.679126       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:26.687348       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1009 19:43:26.687432       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1009 19:43:26.692997       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1009 19:43:26.723100       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1009 19:43:26.756144       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1009 19:43:26.757777       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1009 19:43:26.758113       1 instance.go:239] Using reconciler: lease
	W1009 19:43:26.762389       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:26.762627       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:27.680086       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:27.687827       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:27.763316       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:29.300711       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:29.412346       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:29.595234       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:31.718825       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:32.221421       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:32.333262       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:35.783053       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:36.568121       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:37.029649       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:42.385408       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:42.598628       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 19:43:43.979292       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [0d3e97ac0c8ea1a4165721f52582915f343d7bbc94e6703171b3bda25a0d26ff] <==
	I1009 19:43:56.186178       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1009 19:43:56.186150       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1009 19:43:56.190884       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1009 19:43:56.192470       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1009 19:43:56.194907       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1009 19:43:56.194953       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1009 19:43:56.204442       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:43:56.204517       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1009 19:43:56.204535       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 19:43:56.210456       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1009 19:43:56.211247       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1009 19:43:56.214783       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1009 19:43:56.214795       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1009 19:43:56.219199       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1009 19:43:56.224846       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1009 19:43:56.225996       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1009 19:43:56.232301       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1009 19:43:56.236122       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1009 19:43:56.236243       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1009 19:43:56.236346       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1009 19:43:56.236297       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1009 19:43:56.236311       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1009 19:43:56.236328       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1009 19:43:56.236337       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1009 19:43:56.236277       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	
	
	==> kube-controller-manager [fd52e789b1557c75034fd366118f56b15157e680cd59cbc92559fd05a9511bc9] <==
	I1009 19:43:26.602314       1 serving.go:386] Generated self-signed cert in-memory
	I1009 19:43:27.352617       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1009 19:43:27.352657       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:43:27.354391       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1009 19:43:27.354541       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1009 19:43:27.355133       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1009 19:43:27.355617       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-proxy [4ffde06198cbe6aad03a9a45a8b1affd409ae58bd86af1a2edf2c36944dda73f] <==
	E1009 19:43:52.757317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"pause-612343\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1009 19:43:58.100416       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:43:58.100473       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.79"]
	E1009 19:43:58.100690       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:43:58.156695       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1009 19:43:58.156918       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 19:43:58.157052       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:43:58.169626       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:43:58.170001       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:43:58.170032       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:43:58.175213       1 config.go:200] "Starting service config controller"
	I1009 19:43:58.175367       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:43:58.175431       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:43:58.175848       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:43:58.175915       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:43:58.175921       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:43:58.176283       1 config.go:309] "Starting node config controller"
	I1009 19:43:58.176291       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:43:58.176296       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:43:58.276517       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 19:43:58.276562       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 19:43:58.276714       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [f30930d41ed78ad8cacac4a933b82bacbafa334955df6a526020cdb0bdbd20cf] <==
	I1009 19:42:19.820581       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1009 19:42:19.920863       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1009 19:42:19.920934       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.79"]
	E1009 19:42:19.921006       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:42:19.970406       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1009 19:42:19.970479       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 19:42:19.970525       1 server_linux.go:132] "Using iptables Proxier"
	I1009 19:42:19.983228       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:42:19.985405       1 server.go:527] "Version info" version="v1.34.1"
	I1009 19:42:19.985447       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:42:19.992965       1 config.go:200] "Starting service config controller"
	I1009 19:42:19.993027       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1009 19:42:19.993057       1 config.go:106] "Starting endpoint slice config controller"
	I1009 19:42:19.993071       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1009 19:42:19.993095       1 config.go:403] "Starting serviceCIDR config controller"
	I1009 19:42:19.993142       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1009 19:42:19.997648       1 config.go:309] "Starting node config controller"
	I1009 19:42:19.997682       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1009 19:42:19.997932       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1009 19:42:20.093457       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1009 19:42:20.093499       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1009 19:42:20.093538       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [60b0cc479ec3852a706fb93935290f72957b4299faa5a1ef242d48875c76b7b7] <==
	I1009 19:43:27.207974       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [62b69d7131b27ae7a7f30b034bfc3ae096bbf3ed81ed3c86310df0b56ddf7491] <==
	I1009 19:43:51.078465       1 serving.go:386] Generated self-signed cert in-memory
	W1009 19:43:52.782139       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1009 19:43:52.782199       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1009 19:43:52.782213       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1009 19:43:52.782222       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1009 19:43:52.824256       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1009 19:43:52.825905       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:43:52.836812       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1009 19:43:52.838268       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:43:52.838305       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 19:43:52.838339       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 19:43:52.938412       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 09 19:43:51 pause-612343 kubelet[3810]: E1009 19:43:51.033340    3810 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-612343\" not found" node="pause-612343"
	Oct 09 19:43:51 pause-612343 kubelet[3810]: E1009 19:43:51.034423    3810 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-612343\" not found" node="pause-612343"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: E1009 19:43:52.041207    3810 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-612343\" not found" node="pause-612343"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: E1009 19:43:52.041449    3810 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-612343\" not found" node="pause-612343"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: E1009 19:43:52.041506    3810 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-612343\" not found" node="pause-612343"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: I1009 19:43:52.783683    3810 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-612343"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: I1009 19:43:52.850172    3810 apiserver.go:52] "Watching apiserver"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: I1009 19:43:52.884374    3810 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: I1009 19:43:52.885681    3810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd002ef8-9d92-4b2e-a2cf-3acb2cf28e5b-lib-modules\") pod \"kube-proxy-szpll\" (UID: \"cd002ef8-9d92-4b2e-a2cf-3acb2cf28e5b\") " pod="kube-system/kube-proxy-szpll"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: I1009 19:43:52.885780    3810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd002ef8-9d92-4b2e-a2cf-3acb2cf28e5b-xtables-lock\") pod \"kube-proxy-szpll\" (UID: \"cd002ef8-9d92-4b2e-a2cf-3acb2cf28e5b\") " pod="kube-system/kube-proxy-szpll"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: I1009 19:43:52.909848    3810 kubelet_node_status.go:124] "Node was previously registered" node="pause-612343"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: I1009 19:43:52.910212    3810 kubelet_node_status.go:78] "Successfully registered node" node="pause-612343"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: I1009 19:43:52.910320    3810 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: I1009 19:43:52.913362    3810 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: E1009 19:43:52.973839    3810 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-612343\" already exists" pod="kube-system/kube-apiserver-pause-612343"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: I1009 19:43:52.974056    3810 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-612343"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: E1009 19:43:52.992050    3810 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-612343\" already exists" pod="kube-system/kube-controller-manager-pause-612343"
	Oct 09 19:43:52 pause-612343 kubelet[3810]: I1009 19:43:52.992076    3810 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-612343"
	Oct 09 19:43:53 pause-612343 kubelet[3810]: E1009 19:43:53.003778    3810 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-612343\" already exists" pod="kube-system/kube-scheduler-pause-612343"
	Oct 09 19:43:53 pause-612343 kubelet[3810]: I1009 19:43:53.005886    3810 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-612343"
	Oct 09 19:43:53 pause-612343 kubelet[3810]: E1009 19:43:53.031548    3810 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-612343\" already exists" pod="kube-system/etcd-pause-612343"
	Oct 09 19:43:59 pause-612343 kubelet[3810]: E1009 19:43:59.012231    3810 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760039039010608753  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 09 19:43:59 pause-612343 kubelet[3810]: E1009 19:43:59.013100    3810 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760039039010608753  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 09 19:44:09 pause-612343 kubelet[3810]: E1009 19:44:09.019457    3810 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760039049018608435  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 09 19:44:09 pause-612343 kubelet[3810]: E1009 19:44:09.019488    3810 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760039049018608435  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-612343 -n pause-612343
helpers_test.go:269: (dbg) Run:  kubectl --context pause-612343 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (81.46s)

                                                
                                    

Test pass (281/324)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 25.75
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.14
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 12.66
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.14
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.63
22 TestOffline 77.27
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 207.5
31 TestAddons/serial/GCPAuth/Namespaces 0.16
32 TestAddons/serial/GCPAuth/FakeCredentials 11.52
35 TestAddons/parallel/Registry 19.66
36 TestAddons/parallel/RegistryCreds 0.72
38 TestAddons/parallel/InspektorGadget 6.3
39 TestAddons/parallel/MetricsServer 7.86
41 TestAddons/parallel/CSI 50.01
42 TestAddons/parallel/Headlamp 22.28
43 TestAddons/parallel/CloudSpanner 6
44 TestAddons/parallel/LocalPath 57.2
45 TestAddons/parallel/NvidiaDevicePlugin 6.94
46 TestAddons/parallel/Yakd 11.89
48 TestAddons/StoppedEnableDisable 81.37
49 TestCertOptions 84.2
50 TestCertExpiration 288.12
52 TestForceSystemdFlag 59.61
53 TestForceSystemdEnv 56.81
55 TestKVMDriverInstallOrUpdate 1.14
59 TestErrorSpam/setup 40.71
60 TestErrorSpam/start 0.34
61 TestErrorSpam/status 0.79
62 TestErrorSpam/pause 1.74
63 TestErrorSpam/unpause 1.86
64 TestErrorSpam/stop 4.9
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 89.38
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 30.18
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.79
76 TestFunctional/serial/CacheCmd/cache/add_local 2.34
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.71
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 38.86
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.63
87 TestFunctional/serial/LogsFileCmd 1.6
88 TestFunctional/serial/InvalidService 4.18
90 TestFunctional/parallel/ConfigCmd 0.33
91 TestFunctional/parallel/DashboardCmd 31.93
92 TestFunctional/parallel/DryRun 0.3
93 TestFunctional/parallel/InternationalLanguage 0.14
94 TestFunctional/parallel/StatusCmd 0.98
98 TestFunctional/parallel/ServiceCmdConnect 9.58
99 TestFunctional/parallel/AddonsCmd 0.14
100 TestFunctional/parallel/PersistentVolumeClaim 50.1
102 TestFunctional/parallel/SSHCmd 0.41
103 TestFunctional/parallel/CpCmd 1.39
104 TestFunctional/parallel/MySQL 28.98
105 TestFunctional/parallel/FileSync 0.28
106 TestFunctional/parallel/CertSync 1.46
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
114 TestFunctional/parallel/License 0.4
115 TestFunctional/parallel/Version/short 0.05
116 TestFunctional/parallel/Version/components 0.7
117 TestFunctional/parallel/ServiceCmd/DeployApp 10.2
127 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
128 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
129 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
130 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
131 TestFunctional/parallel/ImageCommands/ImageBuild 4.24
132 TestFunctional/parallel/ImageCommands/Setup 1.9
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.24
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.88
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.8
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.55
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.71
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
140 TestFunctional/parallel/ServiceCmd/List 0.46
141 TestFunctional/parallel/ProfileCmd/profile_not_create 0.62
142 TestFunctional/parallel/MountCmd/any-port 29.21
143 TestFunctional/parallel/ServiceCmd/JSONOutput 0.36
144 TestFunctional/parallel/ProfileCmd/profile_list 0.46
145 TestFunctional/parallel/ServiceCmd/HTTPS 0.49
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
147 TestFunctional/parallel/ServiceCmd/Format 0.45
148 TestFunctional/parallel/ServiceCmd/URL 0.43
149 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
150 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
151 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
152 TestFunctional/parallel/MountCmd/specific-port 1.85
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.52
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 241.02
162 TestMultiControlPlane/serial/DeployApp 8.51
163 TestMultiControlPlane/serial/PingHostFromPods 1.23
164 TestMultiControlPlane/serial/AddWorkerNode 48.51
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.92
167 TestMultiControlPlane/serial/CopyFile 13.32
168 TestMultiControlPlane/serial/StopSecondaryNode 87.91
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.67
170 TestMultiControlPlane/serial/RestartSecondaryNode 44
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.03
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 379.37
173 TestMultiControlPlane/serial/DeleteSecondaryNode 18.97
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.68
175 TestMultiControlPlane/serial/StopCluster 249.44
176 TestMultiControlPlane/serial/RestartCluster 105.42
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
178 TestMultiControlPlane/serial/AddSecondaryNode 108.83
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.87
183 TestJSONOutput/start/Command 88.48
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.78
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.68
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 6.95
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.2
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 83.97
215 TestMountStart/serial/StartWithMountFirst 23.31
216 TestMountStart/serial/VerifyMountFirst 0.37
217 TestMountStart/serial/StartWithMountSecond 24.86
218 TestMountStart/serial/VerifyMountSecond 0.37
219 TestMountStart/serial/DeleteFirst 0.7
220 TestMountStart/serial/VerifyMountPostDelete 0.36
221 TestMountStart/serial/Stop 1.31
222 TestMountStart/serial/RestartStopped 20.71
223 TestMountStart/serial/VerifyMountPostStop 0.37
226 TestMultiNode/serial/FreshStart2Nodes 103.74
227 TestMultiNode/serial/DeployApp2Nodes 6.7
228 TestMultiNode/serial/PingHostFrom2Pods 0.82
229 TestMultiNode/serial/AddNode 45.62
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.58
232 TestMultiNode/serial/CopyFile 7.25
233 TestMultiNode/serial/StopNode 2.69
234 TestMultiNode/serial/StartAfterStop 40.47
235 TestMultiNode/serial/RestartKeepsNodes 311.89
236 TestMultiNode/serial/DeleteNode 2.7
237 TestMultiNode/serial/StopMultiNode 168.38
238 TestMultiNode/serial/RestartMultiNode 127.43
239 TestMultiNode/serial/ValidateNameConflict 39.91
246 TestScheduledStopUnix 112.41
250 TestRunningBinaryUpgrade 151.96
252 TestKubernetesUpgrade 177.74
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 81.81
264 TestNetworkPlugins/group/false 3.96
265 TestNoKubernetes/serial/StartWithStopK8s 28.97
269 TestNoKubernetes/serial/Start 47.41
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
271 TestNoKubernetes/serial/ProfileList 0.85
272 TestNoKubernetes/serial/Stop 1.29
273 TestNoKubernetes/serial/StartNoArgs 62.27
274 TestStoppedBinaryUpgrade/Setup 3.02
275 TestStoppedBinaryUpgrade/Upgrade 125.97
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
285 TestPause/serial/Start 115.95
286 TestNetworkPlugins/group/auto/Start 104.95
287 TestStoppedBinaryUpgrade/MinikubeLogs 1.09
288 TestNetworkPlugins/group/kindnet/Start 59.03
290 TestNetworkPlugins/group/auto/KubeletFlags 0.22
291 TestNetworkPlugins/group/auto/NetCatPod 11.23
292 TestNetworkPlugins/group/auto/DNS 0.17
293 TestNetworkPlugins/group/auto/Localhost 0.15
294 TestNetworkPlugins/group/auto/HairPin 0.14
295 TestNetworkPlugins/group/calico/Start 76.92
296 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
297 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
298 TestNetworkPlugins/group/kindnet/NetCatPod 11.29
299 TestNetworkPlugins/group/kindnet/DNS 0.17
300 TestNetworkPlugins/group/kindnet/Localhost 0.14
301 TestNetworkPlugins/group/kindnet/HairPin 0.17
302 TestNetworkPlugins/group/custom-flannel/Start 79.82
303 TestNetworkPlugins/group/enable-default-cni/Start 73.85
304 TestNetworkPlugins/group/flannel/Start 109.72
305 TestNetworkPlugins/group/calico/ControllerPod 6.01
306 TestNetworkPlugins/group/calico/KubeletFlags 0.26
307 TestNetworkPlugins/group/calico/NetCatPod 12.32
308 TestNetworkPlugins/group/calico/DNS 0.22
309 TestNetworkPlugins/group/calico/Localhost 0.16
310 TestNetworkPlugins/group/calico/HairPin 0.17
311 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
312 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.26
313 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
314 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.28
315 TestNetworkPlugins/group/bridge/Start 88.12
316 TestNetworkPlugins/group/custom-flannel/DNS 0.17
317 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
318 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
319 TestNetworkPlugins/group/enable-default-cni/DNS 0.32
320 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
321 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
323 TestStartStop/group/old-k8s-version/serial/FirstStart 99.81
325 TestStartStop/group/no-preload/serial/FirstStart 124.77
326 TestNetworkPlugins/group/flannel/ControllerPod 6.01
327 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
328 TestNetworkPlugins/group/flannel/NetCatPod 12.3
329 TestNetworkPlugins/group/flannel/DNS 0.2
330 TestNetworkPlugins/group/flannel/Localhost 0.15
331 TestNetworkPlugins/group/flannel/HairPin 0.28
333 TestStartStop/group/embed-certs/serial/FirstStart 95.44
334 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
335 TestNetworkPlugins/group/bridge/NetCatPod 12.24
336 TestNetworkPlugins/group/bridge/DNS 0.19
337 TestNetworkPlugins/group/bridge/Localhost 0.23
338 TestNetworkPlugins/group/bridge/HairPin 0.16
340 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85.06
341 TestStartStop/group/old-k8s-version/serial/DeployApp 11.34
342 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.18
343 TestStartStop/group/old-k8s-version/serial/Stop 89.82
344 TestStartStop/group/no-preload/serial/DeployApp 10.35
345 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.14
346 TestStartStop/group/no-preload/serial/Stop 74.01
347 TestStartStop/group/embed-certs/serial/DeployApp 11.29
348 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1
349 TestStartStop/group/embed-certs/serial/Stop 70.7
350 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 12.29
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.01
352 TestStartStop/group/default-k8s-diff-port/serial/Stop 87.55
353 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
354 TestStartStop/group/old-k8s-version/serial/SecondStart 46.15
355 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
356 TestStartStop/group/no-preload/serial/SecondStart 92
357 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
358 TestStartStop/group/embed-certs/serial/SecondStart 52.8
359 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 21.01
360 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.08
361 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
362 TestStartStop/group/old-k8s-version/serial/Pause 3.36
363 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
364 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 48.48
366 TestStartStop/group/newest-cni/serial/FirstStart 67.17
367 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 11.01
368 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
369 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.32
370 TestStartStop/group/embed-certs/serial/Pause 3.66
371 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
372 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
373 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
374 TestStartStop/group/no-preload/serial/Pause 3.17
375 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 9.01
376 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
377 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
378 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.9
379 TestStartStop/group/newest-cni/serial/DeployApp 0
380 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.07
381 TestStartStop/group/newest-cni/serial/Stop 2.11
382 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
383 TestStartStop/group/newest-cni/serial/SecondStart 35.99
384 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
385 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
386 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
387 TestStartStop/group/newest-cni/serial/Pause 3.93
TestDownloadOnly/v1.28.0/json-events (25.75s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-848873 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-848873 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (25.749597135s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (25.75s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1009 18:39:16.914853  140358 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1009 18:39:16.914979  140358 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-136449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-848873
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-848873: exit status 85 (56.807868ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-848873 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-848873 │ jenkins │ v1.37.0 │ 09 Oct 25 18:38 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:38:51
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:38:51.204398  140370 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:38:51.204691  140370 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:38:51.204701  140370 out.go:374] Setting ErrFile to fd 2...
	I1009 18:38:51.204706  140370 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:38:51.204942  140370 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-136449/.minikube/bin
	W1009 18:38:51.205088  140370 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21683-136449/.minikube/config/config.json: open /home/jenkins/minikube-integration/21683-136449/.minikube/config/config.json: no such file or directory
	I1009 18:38:51.205641  140370 out.go:368] Setting JSON to true
	I1009 18:38:51.206633  140370 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4871,"bootTime":1760030260,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:38:51.206725  140370 start.go:143] virtualization: kvm guest
	I1009 18:38:51.209001  140370 out.go:99] [download-only-848873] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1009 18:38:51.209135  140370 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21683-136449/.minikube/cache/preloaded-tarball: no such file or directory
	I1009 18:38:51.209171  140370 notify.go:221] Checking for updates...
	I1009 18:38:51.210334  140370 out.go:171] MINIKUBE_LOCATION=21683
	I1009 18:38:51.211756  140370 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:38:51.212921  140370 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21683-136449/kubeconfig
	I1009 18:38:51.214007  140370 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-136449/.minikube
	I1009 18:38:51.215059  140370 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1009 18:38:51.217166  140370 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 18:38:51.217417  140370 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 18:38:51.249296  140370 out.go:99] Using the kvm2 driver based on user configuration
	I1009 18:38:51.249331  140370 start.go:309] selected driver: kvm2
	I1009 18:38:51.249342  140370 start.go:930] validating driver "kvm2" against <nil>
	I1009 18:38:51.249698  140370 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:38:51.249785  140370 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-136449/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 18:38:51.262945  140370 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 18:38:51.262977  140370 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-136449/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 18:38:51.275688  140370 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 18:38:51.275724  140370 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 18:38:51.276213  140370 start_flags.go:411] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1009 18:38:51.276369  140370 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 18:38:51.276392  140370 cni.go:84] Creating CNI manager for ""
	I1009 18:38:51.276436  140370 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 18:38:51.276444  140370 start_flags.go:337] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 18:38:51.276486  140370 start.go:353] cluster config:
	{Name:download-only-848873 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-848873 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:38:51.276678  140370 iso.go:125] acquiring lock: {Name:mk98a4af23a55ce5e8a323d2964def6dd3fc61ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:38:51.278298  140370 out.go:99] Downloading VM boot image ...
	I1009 18:38:51.278349  140370 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21683-136449/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1009 18:39:02.474657  140370 out.go:99] Starting "download-only-848873" primary control-plane node in "download-only-848873" cluster
	I1009 18:39:02.474693  140370 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1009 18:39:02.582529  140370 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1009 18:39:02.582591  140370 cache.go:58] Caching tarball of preloaded images
	I1009 18:39:02.582807  140370 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1009 18:39:02.584820  140370 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1009 18:39:02.584845  140370 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1009 18:39:02.787437  140370 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1009 18:39:02.787605  140370 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21683-136449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1009 18:39:15.278751  140370 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1009 18:39:15.279118  140370 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/download-only-848873/config.json ...
	I1009 18:39:15.279157  140370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/download-only-848873/config.json: {Name:mkddda2f0de2fa16dd2d86932d8b4d4d8f4099bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:39:15.279333  140370 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1009 18:39:15.279530  140370 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21683-136449/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-848873 host does not exist
	  To start a cluster, run: "minikube start -p download-only-848873"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-848873
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (12.66s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-625858 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-625858 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (12.657270859s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (12.66s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1009 18:39:29.899239  140358 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1009 18:39:29.899274  140358 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-136449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-625858
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-625858: exit status 85 (58.761704ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-848873 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-848873 │ jenkins │ v1.37.0 │ 09 Oct 25 18:38 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │ 09 Oct 25 18:39 UTC │
	│ delete  │ -p download-only-848873                                                                                                                                                                             │ download-only-848873 │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │ 09 Oct 25 18:39 UTC │
	│ start   │ -o=json --download-only -p download-only-625858 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-625858 │ jenkins │ v1.37.0 │ 09 Oct 25 18:39 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:39:17
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:39:17.288078  140625 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:39:17.288354  140625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:39:17.288364  140625 out.go:374] Setting ErrFile to fd 2...
	I1009 18:39:17.288369  140625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:39:17.288684  140625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-136449/.minikube/bin
	I1009 18:39:17.289226  140625 out.go:368] Setting JSON to true
	I1009 18:39:17.290082  140625 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4897,"bootTime":1760030260,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:39:17.290177  140625 start.go:143] virtualization: kvm guest
	I1009 18:39:17.291995  140625 out.go:99] [download-only-625858] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:39:17.292151  140625 notify.go:221] Checking for updates...
	I1009 18:39:17.293438  140625 out.go:171] MINIKUBE_LOCATION=21683
	I1009 18:39:17.294685  140625 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:39:17.295819  140625 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21683-136449/kubeconfig
	I1009 18:39:17.296768  140625 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-136449/.minikube
	I1009 18:39:17.297753  140625 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1009 18:39:17.299653  140625 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 18:39:17.299901  140625 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 18:39:17.329270  140625 out.go:99] Using the kvm2 driver based on user configuration
	I1009 18:39:17.329304  140625 start.go:309] selected driver: kvm2
	I1009 18:39:17.329313  140625 start.go:930] validating driver "kvm2" against <nil>
	I1009 18:39:17.329640  140625 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:39:17.329736  140625 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-136449/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 18:39:17.342890  140625 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 18:39:17.342920  140625 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-136449/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 18:39:17.355930  140625 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1009 18:39:17.355978  140625 start_flags.go:328] no existing cluster config was found, will generate one from the flags 
	I1009 18:39:17.356483  140625 start_flags.go:411] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1009 18:39:17.356643  140625 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 18:39:17.356668  140625 cni.go:84] Creating CNI manager for ""
	I1009 18:39:17.356712  140625 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 18:39:17.356720  140625 start_flags.go:337] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 18:39:17.356764  140625 start.go:353] cluster config:
	{Name:download-only-625858 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-625858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:39:17.356851  140625 iso.go:125] acquiring lock: {Name:mk98a4af23a55ce5e8a323d2964def6dd3fc61ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:39:17.358213  140625 out.go:99] Starting "download-only-625858" primary control-plane node in "download-only-625858" cluster
	I1009 18:39:17.358241  140625 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:39:17.462811  140625 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:39:17.462841  140625 cache.go:58] Caching tarball of preloaded images
	I1009 18:39:17.462997  140625 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:39:17.464590  140625 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1009 18:39:17.464612  140625 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1009 18:39:17.664434  140625 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1009 18:39:17.664480  140625 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21683-136449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-625858 host does not exist
	  To start a cluster, run: "minikube start -p download-only-625858"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-625858
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.63s)

                                                
                                                
=== RUN   TestBinaryMirror
I1009 18:39:30.475852  140358 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-222043 --alsologtostderr --binary-mirror http://127.0.0.1:39753 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-222043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-222043
--- PASS: TestBinaryMirror (0.63s)

                                                
                                    
TestOffline (77.27s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-185513 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-185513 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m16.393582114s)
helpers_test.go:175: Cleaning up "offline-crio-185513" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-185513
--- PASS: TestOffline (77.27s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-916037
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-916037: exit status 85 (55.654496ms)

                                                
                                                
-- stdout --
	* Profile "addons-916037" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-916037"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-916037
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-916037: exit status 85 (52.655965ms)

                                                
                                                
-- stdout --
	* Profile "addons-916037" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-916037"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (207.5s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-916037 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-916037 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m27.495796579s)
--- PASS: TestAddons/Setup (207.50s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-916037 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-916037 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (11.52s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-916037 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-916037 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fb5a2901-453c-4cfa-8395-271b98194991] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [fb5a2901-453c-4cfa-8395-271b98194991] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.003752516s
addons_test.go:694: (dbg) Run:  kubectl --context addons-916037 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-916037 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-916037 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.52s)

                                                
                                    
TestAddons/parallel/Registry (19.66s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 10.666384ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-mhqxq" [f05566f3-9afa-47e6-9fc5-7a69a6a0fc84] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007292223s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-d2m77" [1bc08f6d-dc9c-42fd-a0a6-ce0dcf5e0cbb] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003018318s
addons_test.go:392: (dbg) Run:  kubectl --context addons-916037 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-916037 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-916037 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.773819078s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-916037 ip
2025/10/09 18:43:38 [DEBUG] GET http://192.168.39.158:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-916037 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.66s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.72s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.902247ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-916037
addons_test.go:332: (dbg) Run:  kubectl --context addons-916037 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-916037 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.72s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.3s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-rhngd" [0bd96665-3b91-4342-96ff-226330707e9c] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003818709s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-916037 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.30s)

                                                
                                    
TestAddons/parallel/MetricsServer (7.86s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 8.685479ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-n5phl" [f6523583-4c8f-41f1-93b2-8dc87efbe5d4] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005774456s
addons_test.go:463: (dbg) Run:  kubectl --context addons-916037 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-916037 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-916037 addons disable metrics-server --alsologtostderr -v=1: (1.764551669s)
--- PASS: TestAddons/parallel/MetricsServer (7.86s)

                                                
                                    
TestAddons/parallel/CSI (50.01s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1009 18:43:26.029988  140358 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1009 18:43:26.035763  140358 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1009 18:43:26.035786  140358 kapi.go:107] duration metric: took 5.812215ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 5.820744ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-916037 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-916037 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-916037 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-916037 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-916037 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-916037 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-916037 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-916037 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-916037 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [86ee3d36-c178-4cdb-8dfd-7179addf47b4] Pending
helpers_test.go:352: "task-pv-pod" [86ee3d36-c178-4cdb-8dfd-7179addf47b4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [86ee3d36-c178-4cdb-8dfd-7179addf47b4] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 17.003899415s
addons_test.go:572: (dbg) Run:  kubectl --context addons-916037 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-916037 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-916037 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-916037 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-916037 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-916037 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-916037 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-916037 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-916037 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-916037 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-916037 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-916037 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-916037 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-916037 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [b90f13bd-750c-4e52-a09c-b9694b657b1b] Pending
helpers_test.go:352: "task-pv-pod-restore" [b90f13bd-750c-4e52-a09c-b9694b657b1b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [b90f13bd-750c-4e52-a09c-b9694b657b1b] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.003973931s
addons_test.go:614: (dbg) Run:  kubectl --context addons-916037 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-916037 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-916037 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-916037 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-916037 addons disable volumesnapshots --alsologtostderr -v=1: (1.009510641s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-916037 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-916037 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.018407247s)
--- PASS: TestAddons/parallel/CSI (50.01s)

                                                
                                    
TestAddons/parallel/Headlamp (22.28s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-916037 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-c9nzd" [8d7c8d23-ec06-4b9b-9f0f-5260e5ef3213] Pending
helpers_test.go:352: "headlamp-6945c6f4d-c9nzd" [8d7c8d23-ec06-4b9b-9f0f-5260e5ef3213] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-c9nzd" [8d7c8d23-ec06-4b9b-9f0f-5260e5ef3213] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-c9nzd" [8d7c8d23-ec06-4b9b-9f0f-5260e5ef3213] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.004138199s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-916037 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-916037 addons disable headlamp --alsologtostderr -v=1: (6.381796483s)
--- PASS: TestAddons/parallel/Headlamp (22.28s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-zzjjd" [4cf2c2c5-0450-4f9d-8611-7905e69f2f71] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002927644s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-916037 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.00s)

                                                
                                    
TestAddons/parallel/LocalPath (57.2s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-916037 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-916037 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-916037 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-916037 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-916037 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-916037 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-916037 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-916037 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-916037 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-916037 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [b02113ce-c5a0-4f29-ad26-903b99182b6f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [b02113ce-c5a0-4f29-ad26-903b99182b6f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [b02113ce-c5a0-4f29-ad26-903b99182b6f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.007991762s
addons_test.go:967: (dbg) Run:  kubectl --context addons-916037 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-916037 ssh "cat /opt/local-path-provisioner/pvc-cf70288e-af26-477d-beee-bb5695fd7609_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-916037 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-916037 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-916037 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-916037 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.395960546s)
--- PASS: TestAddons/parallel/LocalPath (57.20s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.94s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-qknj6" [fe6c083c-73c5-4674-8b30-d26ef48988f9] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004474298s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-916037 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.94s)

                                                
                                    
TestAddons/parallel/Yakd (11.89s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-ls9f7" [1c2d2a83-68e0-4063-bc1b-a02d4f673196] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004376399s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-916037 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-916037 addons disable yakd --alsologtostderr -v=1: (5.880249759s)
--- PASS: TestAddons/parallel/Yakd (11.89s)

                                                
                                    
TestAddons/StoppedEnableDisable (81.37s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-916037
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-916037: (1m21.097505815s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-916037
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-916037
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-916037
--- PASS: TestAddons/StoppedEnableDisable (81.37s)

                                                
                                    
TestCertOptions (84.2s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-628189 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-628189 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m22.753564033s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-628189 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-628189 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-628189 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-628189" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-628189
--- PASS: TestCertOptions (84.20s)

                                                
                                    
TestCertExpiration (288.12s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-635437 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-635437 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m13.319235497s)
E1009 19:41:15.366753  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-635437 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-635437 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (33.852322246s)
helpers_test.go:175: Cleaning up "cert-expiration-635437" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-635437
--- PASS: TestCertExpiration (288.12s)

                                                
                                    
TestForceSystemdFlag (59.61s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-961192 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-961192 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (58.411935458s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-961192 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-961192" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-961192
--- PASS: TestForceSystemdFlag (59.61s)
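The `cat /etc/crio/crio.conf.d/02-crio.conf` step checks that --force-systemd switched CRI-O to the systemd cgroup manager. A minimal sketch of that assertion; cgroup_manager is CRI-O's standard TOML key, but the exact string matched here is an assumption rather than the test's own expectation:

// force_systemd_sketch.go: confirm CRI-O in the VM uses the systemd cgroup manager.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "force-systemd-flag-961192" // profile from this run
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
		"cat /etc/crio/crio.conf.d/02-crio.conf").CombinedOutput()
	if err != nil {
		log.Fatalf("ssh failed: %v\n%s", err, out)
	}
	if !strings.Contains(string(out), `cgroup_manager = "systemd"`) {
		log.Fatalf("expected systemd cgroup manager, got:\n%s", out)
	}
	fmt.Println("CRI-O is configured for systemd cgroups")
}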

                                                
                                    
x
+
TestForceSystemdEnv (56.81s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-511804 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-511804 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (55.907922466s)
helpers_test.go:175: Cleaning up "force-systemd-env-511804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-511804
--- PASS: TestForceSystemdEnv (56.81s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.14s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1009 19:38:43.645871  140358 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1009 19:38:43.646051  140358 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate4165796853/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1009 19:38:43.674159  140358 install.go:163] /tmp/TestKVMDriverInstallOrUpdate4165796853/001/docker-machine-driver-kvm2 version is 1.1.1
W1009 19:38:43.674199  140358 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1009 19:38:43.674326  140358 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1009 19:38:43.674373  140358 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4165796853/001/docker-machine-driver-kvm2
I1009 19:38:44.644582  140358 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate4165796853/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1009 19:38:44.661099  140358 install.go:163] /tmp/TestKVMDriverInstallOrUpdate4165796853/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (1.14s)
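The log above finds a stale 1.1.1 driver, downloads the 1.37.0 release binary with its published sha256, and re-validates the version. A rough sketch of the download-and-verify step using plain net/http and crypto/sha256 rather than minikube's internal download package; the .sha256 file is assumed to carry the hex digest as its first token:

// driver_update_sketch.go: fetch the kvm2 driver release binary and verify its checksum.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

const base = "https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64"

func fetch(url string) []byte {
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		log.Fatalf("GET %s: %s", url, resp.Status)
	}
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	return body
}

func main() {
	bin := fetch(base)
	// Assumption: the .sha256 file starts with the hex digest.
	want := strings.Fields(string(fetch(base + ".sha256")))[0]
	sum := sha256.Sum256(bin)
	if got := hex.EncodeToString(sum[:]); got != want {
		log.Fatalf("checksum mismatch: got %s, want %s", got, want)
	}
	if err := os.WriteFile("docker-machine-driver-kvm2", bin, 0o755); err != nil {
		log.Fatal(err)
	}
	fmt.Println("driver verified and installed")
}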

                                                
                                    
x
+
TestErrorSpam/setup (40.71s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-527164 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-527164 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1009 18:47:59.344475  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:47:59.353233  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:47:59.364650  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:47:59.386107  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:47:59.427541  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:47:59.508993  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:47:59.670521  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:47:59.992288  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:48:00.634390  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:48:01.916073  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:48:04.477598  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:48:09.599612  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-527164 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-527164 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.713168925s)
--- PASS: TestErrorSpam/setup (40.71s)

                                                
                                    
x
+
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527164 --log_dir /tmp/nospam-527164 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527164 --log_dir /tmp/nospam-527164 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527164 --log_dir /tmp/nospam-527164 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)
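Each of these nospam subtests runs an ordinary command against the same profile and then inspects the combined output for unexpected noise. A rough sketch of that idea with a hand-written marker list; the real test maintains its own expectations, so the markers below are purely illustrative:

// nospam_sketch.go: run a minikube command and flag warning/error-looking lines.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "nospam-527164", "--log_dir", "/tmp/nospam-527164",
		"start", "--dry-run").CombinedOutput()
	if err != nil {
		log.Fatalf("command failed: %v\n%s", err, out)
	}
	for _, line := range strings.Split(string(out), "\n") {
		for _, marker := range []string{"error", "fail", "warning"} { // illustrative list only
			if strings.Contains(strings.ToLower(line), marker) {
				log.Fatalf("unexpected spam in output: %q", line)
			}
		}
	}
	fmt.Println("no unexpected warnings or errors in output")
}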

                                                
                                    
x
+
TestErrorSpam/status (0.79s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527164 --log_dir /tmp/nospam-527164 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527164 --log_dir /tmp/nospam-527164 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527164 --log_dir /tmp/nospam-527164 status
--- PASS: TestErrorSpam/status (0.79s)

                                                
                                    
x
+
TestErrorSpam/pause (1.74s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527164 --log_dir /tmp/nospam-527164 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527164 --log_dir /tmp/nospam-527164 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527164 --log_dir /tmp/nospam-527164 pause
--- PASS: TestErrorSpam/pause (1.74s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.86s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527164 --log_dir /tmp/nospam-527164 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527164 --log_dir /tmp/nospam-527164 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527164 --log_dir /tmp/nospam-527164 unpause
--- PASS: TestErrorSpam/unpause (1.86s)

                                                
                                    
x
+
TestErrorSpam/stop (4.9s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527164 --log_dir /tmp/nospam-527164 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-527164 --log_dir /tmp/nospam-527164 stop: (2.211966046s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527164 --log_dir /tmp/nospam-527164 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-527164 --log_dir /tmp/nospam-527164 stop: (1.480219457s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527164 --log_dir /tmp/nospam-527164 stop
E1009 18:48:19.841008  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-527164 --log_dir /tmp/nospam-527164 stop: (1.207442671s)
--- PASS: TestErrorSpam/stop (4.90s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21683-136449/.minikube/files/etc/test/nested/copy/140358/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (89.38s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-413212 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1009 18:48:40.322389  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:49:21.285362  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-413212 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m29.375446572s)
--- PASS: TestFunctional/serial/StartWithProxy (89.38s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (30.18s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1009 18:49:49.722277  140358 config.go:182] Loaded profile config "functional-413212": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-413212 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-413212 --alsologtostderr -v=8: (30.175973537s)
functional_test.go:678: soft start took 30.176755791s for "functional-413212" cluster.
I1009 18:50:19.898639  140358 config.go:182] Loaded profile config "functional-413212": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (30.18s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-413212 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.79s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-413212 cache add registry.k8s.io/pause:3.1: (1.285405869s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-413212 cache add registry.k8s.io/pause:3.3: (1.2831657s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-413212 cache add registry.k8s.io/pause:latest: (1.220165433s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.79s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.34s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-413212 /tmp/TestFunctionalserialCacheCmdcacheadd_local1916502863/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 cache add minikube-local-cache-test:functional-413212
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-413212 cache add minikube-local-cache-test:functional-413212: (2.002912241s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 cache delete minikube-local-cache-test:functional-413212
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-413212
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.34s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-413212 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (214.26369ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-413212 cache reload: (1.008553924s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)
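The sequence above is: remove the cached image inside the node with crictl, confirm `crictl inspecti` now fails, run `minikube cache reload`, and confirm the image is back. A compact sketch of the same round trip against the profile from this run:

// cache_reload_sketch.go: delete an image in the node, reload the cache, verify it returns.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func mk(args ...string) ([]byte, error) {
	return exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "functional-413212"}, args...)...).CombinedOutput()
}

func main() {
	if out, err := mk("ssh", "sudo crictl rmi registry.k8s.io/pause:latest"); err != nil {
		log.Fatalf("rmi failed: %v\n%s", err, out)
	}
	if _, err := mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		log.Fatal("image still present after rmi")
	}
	if out, err := mk("cache", "reload"); err != nil {
		log.Fatalf("cache reload failed: %v\n%s", err, out)
	}
	if out, err := mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		log.Fatalf("image missing after reload: %v\n%s", err, out)
	}
	fmt.Println("cache reload restored the image")
}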

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 kubectl -- --context functional-413212 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-413212 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (38.86s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-413212 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1009 18:50:43.210534  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-413212 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.864008653s)
functional_test.go:776: restart took 38.864146152s for "functional-413212" cluster.
I1009 18:51:07.363191  140358 config.go:182] Loaded profile config "functional-413212": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (38.86s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-413212 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
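The phase/status lines above come from listing the control-plane pods as JSON and checking status.phase plus the Ready condition. A small sketch of that parse using standard pod fields, with the struct trimmed to just what the check needs:

// component_health_sketch.go: assert control-plane pods are Running and Ready.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct{ Name string } `json:"metadata"`
		Status   struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-413212",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = true
			}
		}
		fmt.Printf("%s phase=%s ready=%v\n", p.Metadata.Name, p.Status.Phase, ready)
		if p.Status.Phase != "Running" || !ready {
			log.Fatalf("%s is not healthy", p.Metadata.Name)
		}
	}
}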

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.63s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-413212 logs: (1.624566839s)
--- PASS: TestFunctional/serial/LogsCmd (1.63s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.6s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 logs --file /tmp/TestFunctionalserialLogsFileCmd1197255271/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-413212 logs --file /tmp/TestFunctionalserialLogsFileCmd1197255271/001/logs.txt: (1.59928199s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.60s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.18s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-413212 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-413212
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-413212: exit status 115 (285.506875ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.166:30915 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-413212 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.18s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-413212 config get cpus: exit status 14 (49.800834ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-413212 config get cpus: exit status 14 (50.647573ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)
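As the log shows, `config get` on an unset key exits with status 14, so "unset" can be told apart from a real failure by the exit code alone. A sketch of reading that code from Go, taking the exit-14 convention from the run above:

// config_get_sketch.go: distinguish "key not set" (exit 14) from other failures.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-413212", "config", "get", "cpus").CombinedOutput()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("cpus is set to %s", out)
	case errors.As(err, &ee) && ee.ExitCode() == 14:
		fmt.Println("cpus is not set (exit status 14, as in the log above)")
	default:
		log.Fatalf("unexpected failure: %v\n%s", err, out)
	}
}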

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (31.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-413212 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-413212 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 148774: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (31.93s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-413212 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-413212 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (150.509166ms)

                                                
                                                
-- stdout --
	* [functional-413212] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-136449/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-136449/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:51:27.030485  148343 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:51:27.030742  148343 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:51:27.030753  148343 out.go:374] Setting ErrFile to fd 2...
	I1009 18:51:27.030756  148343 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:51:27.030974  148343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-136449/.minikube/bin
	I1009 18:51:27.031402  148343 out.go:368] Setting JSON to false
	I1009 18:51:27.032426  148343 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5627,"bootTime":1760030260,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:51:27.032532  148343 start.go:143] virtualization: kvm guest
	I1009 18:51:27.034359  148343 out.go:179] * [functional-413212] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:51:27.035632  148343 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 18:51:27.035672  148343 notify.go:221] Checking for updates...
	I1009 18:51:27.037945  148343 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:51:27.039127  148343 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-136449/kubeconfig
	I1009 18:51:27.040305  148343 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-136449/.minikube
	I1009 18:51:27.045159  148343 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:51:27.046613  148343 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:51:27.048146  148343 config.go:182] Loaded profile config "functional-413212": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:51:27.048533  148343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:51:27.048608  148343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:51:27.063654  148343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37345
	I1009 18:51:27.064196  148343 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:51:27.064858  148343 main.go:141] libmachine: Using API Version  1
	I1009 18:51:27.064885  148343 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:51:27.065282  148343 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:51:27.065486  148343 main.go:141] libmachine: (functional-413212) Calling .DriverName
	I1009 18:51:27.065844  148343 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 18:51:27.066342  148343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:51:27.066399  148343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:51:27.080961  148343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37987
	I1009 18:51:27.081368  148343 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:51:27.081838  148343 main.go:141] libmachine: Using API Version  1
	I1009 18:51:27.081858  148343 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:51:27.082244  148343 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:51:27.082443  148343 main.go:141] libmachine: (functional-413212) Calling .DriverName
	I1009 18:51:27.122695  148343 out.go:179] * Using the kvm2 driver based on existing profile
	I1009 18:51:27.123956  148343 start.go:309] selected driver: kvm2
	I1009 18:51:27.123978  148343 start.go:930] validating driver "kvm2" against &{Name:functional-413212 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-413212 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.166 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:51:27.124124  148343 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:51:27.126821  148343 out.go:203] 
	W1009 18:51:27.127899  148343 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1009 18:51:27.128886  148343 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-413212 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.30s)
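The dry run rejects `--memory 250MB` with RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23 before touching the VM. A short sketch that asserts exactly that, reusing the exit code and reason string from the log above:

// dryrun_memory_sketch.go: an undersized --memory should fail fast with exit status 23.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-413212", "--dry-run", "--memory", "250MB",
		"--driver=kvm2", "--container-runtime=crio").CombinedOutput()
	var ee *exec.ExitError
	if !errors.As(err, &ee) || ee.ExitCode() != 23 {
		log.Fatalf("expected exit status 23, got %v\n%s", err, out)
	}
	if !strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") {
		log.Fatalf("expected RSRC_INSUFFICIENT_REQ_MEMORY in output:\n%s", out)
	}
	fmt.Println("memory validation failed fast, as expected")
}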

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-413212 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-413212 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (143.529183ms)

                                                
                                                
-- stdout --
	* [functional-413212] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-136449/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-136449/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:51:27.328525  148442 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:51:27.328648  148442 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:51:27.328659  148442 out.go:374] Setting ErrFile to fd 2...
	I1009 18:51:27.328665  148442 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:51:27.328974  148442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-136449/.minikube/bin
	I1009 18:51:27.329427  148442 out.go:368] Setting JSON to false
	I1009 18:51:27.330406  148442 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5627,"bootTime":1760030260,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:51:27.330501  148442 start.go:143] virtualization: kvm guest
	I1009 18:51:27.331778  148442 out.go:179] * [functional-413212] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1009 18:51:27.333111  148442 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 18:51:27.333103  148442 notify.go:221] Checking for updates...
	I1009 18:51:27.334235  148442 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:51:27.335260  148442 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-136449/kubeconfig
	I1009 18:51:27.336297  148442 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-136449/.minikube
	I1009 18:51:27.337252  148442 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:51:27.338503  148442 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:51:27.343144  148442 config.go:182] Loaded profile config "functional-413212": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:51:27.343553  148442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:51:27.343608  148442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:51:27.357721  148442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40573
	I1009 18:51:27.358307  148442 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:51:27.358846  148442 main.go:141] libmachine: Using API Version  1
	I1009 18:51:27.358874  148442 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:51:27.359302  148442 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:51:27.359536  148442 main.go:141] libmachine: (functional-413212) Calling .DriverName
	I1009 18:51:27.359871  148442 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 18:51:27.360341  148442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:51:27.360393  148442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:51:27.380327  148442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41413
	I1009 18:51:27.380774  148442 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:51:27.381228  148442 main.go:141] libmachine: Using API Version  1
	I1009 18:51:27.381251  148442 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:51:27.381582  148442 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:51:27.381765  148442 main.go:141] libmachine: (functional-413212) Calling .DriverName
	I1009 18:51:27.411943  148442 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1009 18:51:27.413148  148442 start.go:309] selected driver: kvm2
	I1009 18:51:27.413162  148442 start.go:930] validating driver "kvm2" against &{Name:functional-413212 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-413212 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.166 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:51:27.413252  148442 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:51:27.415161  148442 out.go:203] 
	W1009 18:51:27.416200  148442 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1009 18:51:27.417318  148442 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.98s)
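The second status invocation renders a Go template over minikube's status struct; the surrounding labels (including the misspelled "kublet:") are literal text, and only the {{.Host}}, {{.Kubelet}}, {{.APIServer}} and {{.Kubeconfig}} fields are resolved. A small sketch that runs the same kind of template and splits the result back into pairs (the corrected label spelling is mine):

// status_format_sketch.go: render status through a Go template and parse the pairs back out.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	format := "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-413212", "status", "-f", format).CombinedOutput()
	if err != nil {
		log.Fatalf("status failed: %v\n%s", err, out) // status exits non-zero for stopped clusters
	}
	for _, pair := range strings.Split(strings.TrimSpace(string(out)), ",") {
		if kv := strings.SplitN(pair, ":", 2); len(kv) == 2 {
			fmt.Printf("%-10s %s\n", kv[0], kv[1])
		}
	}
}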

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (9.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-413212 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-413212 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-z49q9" [4995e2ea-f9c5-48a5-a8b4-b39639076dd6] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-z49q9" [4995e2ea-f9c5-48a5-a8b4-b39639076dd6] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.006114666s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.166:32203
functional_test.go:1680: http://192.168.39.166:32203: success! body:
Request served by hello-node-connect-7d85dfc575-z49q9

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.166:32203
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.58s)
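The flow above is: create a deployment, expose it as a NodePort service, ask minikube for the service URL, and GET it to check the echo body. A sketch of the same flow with kubectl, minikube and net/http; names and image are the ones from this run, and a single GET stands in for the test's readiness polling:

// service_connect_sketch.go: expose a deployment as NodePort and hit it over HTTP.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
	return string(out)
}

func main() {
	kc := []string{"--context", "functional-413212"}
	run("kubectl", append(kc, "create", "deployment", "hello-node-connect",
		"--image", "kicbase/echo-server")...)
	run("kubectl", append(kc, "expose", "deployment", "hello-node-connect",
		"--type=NodePort", "--port=8080")...)
	// The real test waits for the pod to become Ready before using the URL.
	url := strings.TrimSpace(run("out/minikube-linux-amd64",
		"-p", "functional-413212", "service", "hello-node-connect", "--url"))
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %d\n%s", url, resp.StatusCode, body)
}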

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (50.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [9c6f4883-e86b-48e1-86a7-9eb1f3a4383f] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003802708s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-413212 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-413212 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-413212 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-413212 apply -f testdata/storage-provisioner/pod.yaml
I1009 18:51:21.805680  140358 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [ca257d1d-8529-40d5-98a8-ae1af0de0f73] Pending
helpers_test.go:352: "sp-pod" [ca257d1d-8529-40d5-98a8-ae1af0de0f73] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [ca257d1d-8529-40d5-98a8-ae1af0de0f73] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004380242s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-413212 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-413212 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-413212 delete -f testdata/storage-provisioner/pod.yaml: (3.179207286s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-413212 apply -f testdata/storage-provisioner/pod.yaml
I1009 18:51:40.380597  140358 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [f53a3b6e-3cad-4318-a7ac-fc7578a7884d] Pending
helpers_test.go:352: "sp-pod" [f53a3b6e-3cad-4318-a7ac-fc7578a7884d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [f53a3b6e-3cad-4318-a7ac-fc7578a7884d] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.00427253s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-413212 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (50.10s)
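The persistence check above writes a file into the PVC-backed mount, deletes the pod, recreates it against the same claim, and confirms the file survived. A sketch of that loop using the pod name, mount path and manifests from the log; the wait for the recreated pod is left as a comment:

// pvc_persistence_sketch.go: confirm data in a PVC survives pod recreation.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func kubectl(args ...string) (string, error) {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-413212"}, args...)...).CombinedOutput()
	return string(out), err
}

func must(out string, err error) string {
	if err != nil {
		log.Fatalf("%v\n%s", err, out)
	}
	return out
}

func main() {
	must(kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo"))
	must(kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml"))
	must(kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml"))
	// The real test waits here for the new sp-pod to become Ready.
	out := must(kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"))
	if !strings.Contains(out, "foo") {
		log.Fatal("file did not survive pod recreation")
	}
	fmt.Println("data persisted across pod recreation")
}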

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

TestFunctional/parallel/CpCmd (1.39s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh -n functional-413212 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 cp functional-413212:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3427169521/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh -n functional-413212 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh -n functional-413212 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.39s)

TestFunctional/parallel/MySQL (28.98s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-413212 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-47l7x" [af5384ec-5299-47ba-8bb0-64e8e40ff5ec] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-47l7x" [af5384ec-5299-47ba-8bb0-64e8e40ff5ec] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.010476752s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-413212 exec mysql-5bb876957f-47l7x -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-413212 exec mysql-5bb876957f-47l7x -- mysql -ppassword -e "show databases;": exit status 1 (267.736038ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1009 18:51:53.366344  140358 retry.go:31] will retry after 1.428548406s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-413212 exec mysql-5bb876957f-47l7x -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-413212 exec mysql-5bb876957f-47l7x -- mysql -ppassword -e "show databases;": exit status 1 (191.335586ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1009 18:51:54.987244  140358 retry.go:31] will retry after 814.585123ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-413212 exec mysql-5bb876957f-47l7x -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-413212 exec mysql-5bb876957f-47l7x -- mysql -ppassword -e "show databases;": exit status 1 (147.392538ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1009 18:51:55.950274  140358 retry.go:31] will retry after 2.164498254s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-413212 exec mysql-5bb876957f-47l7x -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.98s)
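
The two failed attempts above are expected while the freshly started mysqld is still initializing: first the root account is not yet usable (ERROR 1045), then the server socket is not up (ERROR 2002), and the harness simply retries with a short delay until the query succeeds. A hedged sketch of such a retry loop in Go (not minikube's actual retry.go helper, which uses its own jittered delays), reusing the pod name and query from the log above:

// mysql_retry_sketch.go - simplified retry-with-backoff around the readiness query.
package main

import (
    "log"
    "os/exec"
    "time"
)

func main() {
    deadline := time.Now().Add(2 * time.Minute)
    backoff := time.Second

    for {
        cmd := exec.Command("kubectl", "--context", "functional-413212",
            "exec", "mysql-5bb876957f-47l7x", "--",
            "mysql", "-ppassword", "-e", "show databases;")
        out, err := cmd.CombinedOutput()
        if err == nil {
            log.Printf("mysql is ready:\n%s", out)
            return
        }
        if time.Now().After(deadline) {
            log.Fatalf("mysql never became ready: %v\n%s", err, out)
        }
        // ERROR 1045 / ERROR 2002 during startup are transient; wait and retry.
        log.Printf("will retry after %v: %v", backoff, err)
        time.Sleep(backoff)
        backoff *= 2
    }
}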

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/140358/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh "sudo cat /etc/test/nested/copy/140358/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.46s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/140358.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh "sudo cat /etc/ssl/certs/140358.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/140358.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh "sudo cat /usr/share/ca-certificates/140358.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1403582.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh "sudo cat /etc/ssl/certs/1403582.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1403582.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh "sudo cat /usr/share/ca-certificates/1403582.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.46s)
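
Each certificate is verified in three places: the copy under /etc/ssl/certs/<name>.pem, the copy under /usr/share/ca-certificates/<name>.pem, and the hash-named entry (51391683.0, 3ec20f2e.0) of the kind an OpenSSL certificate directory uses for lookup. A small Go sketch that reads all three over minikube ssh and confirms they are identical, assuming the built out/minikube-linux-amd64 binary and the running functional-413212 profile; the path list simply mirrors the log above:

// certsync_check_sketch.go - read each synced certificate from the three
// locations checked above and verify the contents match.
package main

import (
    "log"
    "os/exec"
)

func sshCat(path string) string {
    out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-413212",
        "ssh", "sudo cat "+path).CombinedOutput()
    if err != nil {
        log.Fatalf("reading %s failed: %v\n%s", path, err, out)
    }
    return string(out)
}

func main() {
    groups := [][]string{
        {"/etc/ssl/certs/140358.pem", "/usr/share/ca-certificates/140358.pem", "/etc/ssl/certs/51391683.0"},
        {"/etc/ssl/certs/1403582.pem", "/usr/share/ca-certificates/1403582.pem", "/etc/ssl/certs/3ec20f2e.0"},
    }
    for _, g := range groups {
        want := sshCat(g[0])
        for _, p := range g[1:] {
            if sshCat(p) != want {
                log.Fatalf("%s does not match %s", p, g[0])
            }
        }
        log.Printf("all copies of %s match", g[0])
    }
}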

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-413212 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-413212 ssh "sudo systemctl is-active docker": exit status 1 (240.377893ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-413212 ssh "sudo systemctl is-active containerd": exit status 1 (277.42515ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
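
The two exit status 1 results here are the expected outcome: this cluster runs cri-o, so `systemctl is-active docker` and `systemctl is-active containerd` both print "inactive" and return non-zero (the ssh wrapper reports the remote status 3). A short Go sketch of that interpretation, assuming the same binary and profile; treating "non-zero exit plus inactive on stdout" as the passing case is the illustrative part:

// nonactive_runtime_sketch.go - check that docker and containerd are NOT the
// active runtime; "inactive" on stdout with a non-zero exit is the pass case.
package main

import (
    "log"
    "os/exec"
    "strings"
)

func main() {
    for _, unit := range []string{"docker", "containerd"} {
        cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-413212",
            "ssh", "sudo systemctl is-active "+unit)
        out, err := cmd.Output() // stdout only; the ssh status line goes to stderr
        state := strings.TrimSpace(string(out))
        if err != nil && state == "inactive" {
            log.Printf("%s is inactive, as expected for a cri-o cluster", unit)
            continue
        }
        log.Fatalf("unexpected state for %s: %q (err=%v)", unit, state, err)
    }
}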

TestFunctional/parallel/License (0.4s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.40s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.7s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.70s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-413212 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-413212 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-r4msm" [8dd20254-1003-4c02-abeb-a2712c4c5322] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-r4msm" [8dd20254-1003-4c02-abeb-a2712c4c5322] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.008332436s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.20s)
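
The deployment is created imperatively and then exposed as a NodePort service so that the later ServiceCmd subtests have an endpoint to resolve. A hedged sketch of the same two steps plus a readiness wait, using kubectl directly; the deployment name and image match the log above, the rest is illustrative:

// deploy_app_sketch.go - create the echo-server deployment, expose it on a
// NodePort, and wait for it to become available.
package main

import (
    "log"
    "os/exec"
)

func run(args ...string) {
    full := append([]string{"--context", "functional-413212"}, args...)
    out, err := exec.Command("kubectl", full...).CombinedOutput()
    if err != nil {
        log.Fatalf("kubectl %v failed: %v\n%s", args, err, out)
    }
    log.Printf("kubectl %v:\n%s", args, out)
}

func main() {
    run("create", "deployment", "hello-node", "--image", "kicbase/echo-server")
    run("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
    run("wait", "--for=condition=Available", "deployment/hello-node", "--timeout=120s")
}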

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-413212 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-413212
localhost/kicbase/echo-server:functional-413212
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-413212 image ls --format short --alsologtostderr:
I1009 18:51:58.328137  149432 out.go:360] Setting OutFile to fd 1 ...
I1009 18:51:58.328465  149432 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:51:58.328481  149432 out.go:374] Setting ErrFile to fd 2...
I1009 18:51:58.328487  149432 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:51:58.328831  149432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-136449/.minikube/bin
I1009 18:51:58.329705  149432 config.go:182] Loaded profile config "functional-413212": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:51:58.329859  149432 config.go:182] Loaded profile config "functional-413212": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:51:58.330440  149432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 18:51:58.330528  149432 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 18:51:58.345148  149432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38287
I1009 18:51:58.345674  149432 main.go:141] libmachine: () Calling .GetVersion
I1009 18:51:58.346246  149432 main.go:141] libmachine: Using API Version  1
I1009 18:51:58.346268  149432 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 18:51:58.346772  149432 main.go:141] libmachine: () Calling .GetMachineName
I1009 18:51:58.346994  149432 main.go:141] libmachine: (functional-413212) Calling .GetState
I1009 18:51:58.349293  149432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 18:51:58.349345  149432 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 18:51:58.363002  149432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37141
I1009 18:51:58.363462  149432 main.go:141] libmachine: () Calling .GetVersion
I1009 18:51:58.363930  149432 main.go:141] libmachine: Using API Version  1
I1009 18:51:58.363948  149432 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 18:51:58.364278  149432 main.go:141] libmachine: () Calling .GetMachineName
I1009 18:51:58.364453  149432 main.go:141] libmachine: (functional-413212) Calling .DriverName
I1009 18:51:58.364677  149432 ssh_runner.go:195] Run: systemctl --version
I1009 18:51:58.364709  149432 main.go:141] libmachine: (functional-413212) Calling .GetSSHHostname
I1009 18:51:58.367375  149432 main.go:141] libmachine: (functional-413212) DBG | domain functional-413212 has defined MAC address 52:54:00:8c:84:ba in network mk-functional-413212
I1009 18:51:58.367910  149432 main.go:141] libmachine: (functional-413212) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:84:ba", ip: ""} in network mk-functional-413212: {Iface:virbr1 ExpiryTime:2025-10-09 19:48:36 +0000 UTC Type:0 Mac:52:54:00:8c:84:ba Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:functional-413212 Clientid:01:52:54:00:8c:84:ba}
I1009 18:51:58.367937  149432 main.go:141] libmachine: (functional-413212) DBG | domain functional-413212 has defined IP address 192.168.39.166 and MAC address 52:54:00:8c:84:ba in network mk-functional-413212
I1009 18:51:58.368115  149432 main.go:141] libmachine: (functional-413212) Calling .GetSSHPort
I1009 18:51:58.368284  149432 main.go:141] libmachine: (functional-413212) Calling .GetSSHKeyPath
I1009 18:51:58.368428  149432 main.go:141] libmachine: (functional-413212) Calling .GetSSHUsername
I1009 18:51:58.368574  149432 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/functional-413212/id_rsa Username:docker}
I1009 18:51:58.464752  149432 ssh_runner.go:195] Run: sudo crictl images --output json
I1009 18:51:58.555402  149432 main.go:141] libmachine: Making call to close driver server
I1009 18:51:58.555424  149432 main.go:141] libmachine: (functional-413212) Calling .Close
I1009 18:51:58.555765  149432 main.go:141] libmachine: Successfully made call to close driver server
I1009 18:51:58.555784  149432 main.go:141] libmachine: Making call to close connection to plugin binary
I1009 18:51:58.555793  149432 main.go:141] libmachine: Making call to close driver server
I1009 18:51:58.555801  149432 main.go:141] libmachine: (functional-413212) Calling .Close
I1009 18:51:58.556022  149432 main.go:141] libmachine: Successfully made call to close driver server
I1009 18:51:58.556046  149432 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
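
As the stderr trace shows, `image ls` is backed by running `sudo crictl images --output json` inside the VM and reformatting the result. A small Go sketch of that last step, printing repo:tag names the way `--format short` does; the JSON field names ("images", "repoTags") are an assumption inferred from the output above rather than a verified schema:

// crictl_images_sketch.go - parse `crictl images --output json` captured over
// minikube ssh and print the repo:tag names.
package main

import (
    "encoding/json"
    "fmt"
    "log"
    "os/exec"
)

type imageList struct {
    Images []struct {
        RepoTags []string `json:"repoTags"`
    } `json:"images"`
}

func main() {
    out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-413212",
        "ssh", "sudo crictl images --output json").Output()
    if err != nil {
        log.Fatalf("crictl images failed: %v", err)
    }
    var list imageList
    if err := json.Unmarshal(out, &list); err != nil {
        log.Fatalf("unexpected crictl output: %v", err)
    }
    for _, img := range list.Images {
        for _, tag := range img.RepoTags {
            fmt.Println(tag)
        }
    }
}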

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-413212 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-413212  │ 9056ab77afb8e │ 4.95MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-413212  │ fbe22ad6fa63d │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/nginx                 │ latest             │ 07ccdb7838758 │ 164MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-413212 image ls --format table --alsologtostderr:
I1009 18:51:59.101175  149559 out.go:360] Setting OutFile to fd 1 ...
I1009 18:51:59.101440  149559 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:51:59.101449  149559 out.go:374] Setting ErrFile to fd 2...
I1009 18:51:59.101453  149559 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:51:59.101663  149559 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-136449/.minikube/bin
I1009 18:51:59.102277  149559 config.go:182] Loaded profile config "functional-413212": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:51:59.102374  149559 config.go:182] Loaded profile config "functional-413212": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:51:59.102823  149559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 18:51:59.102887  149559 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 18:51:59.118507  149559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38139
I1009 18:51:59.119012  149559 main.go:141] libmachine: () Calling .GetVersion
I1009 18:51:59.119655  149559 main.go:141] libmachine: Using API Version  1
I1009 18:51:59.119689  149559 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 18:51:59.120110  149559 main.go:141] libmachine: () Calling .GetMachineName
I1009 18:51:59.120310  149559 main.go:141] libmachine: (functional-413212) Calling .GetState
I1009 18:51:59.122503  149559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 18:51:59.122553  149559 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 18:51:59.136885  149559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37473
I1009 18:51:59.137439  149559 main.go:141] libmachine: () Calling .GetVersion
I1009 18:51:59.138091  149559 main.go:141] libmachine: Using API Version  1
I1009 18:51:59.138117  149559 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 18:51:59.138468  149559 main.go:141] libmachine: () Calling .GetMachineName
I1009 18:51:59.138687  149559 main.go:141] libmachine: (functional-413212) Calling .DriverName
I1009 18:51:59.138903  149559 ssh_runner.go:195] Run: systemctl --version
I1009 18:51:59.138936  149559 main.go:141] libmachine: (functional-413212) Calling .GetSSHHostname
I1009 18:51:59.142534  149559 main.go:141] libmachine: (functional-413212) DBG | domain functional-413212 has defined MAC address 52:54:00:8c:84:ba in network mk-functional-413212
I1009 18:51:59.143133  149559 main.go:141] libmachine: (functional-413212) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:84:ba", ip: ""} in network mk-functional-413212: {Iface:virbr1 ExpiryTime:2025-10-09 19:48:36 +0000 UTC Type:0 Mac:52:54:00:8c:84:ba Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:functional-413212 Clientid:01:52:54:00:8c:84:ba}
I1009 18:51:59.143164  149559 main.go:141] libmachine: (functional-413212) DBG | domain functional-413212 has defined IP address 192.168.39.166 and MAC address 52:54:00:8c:84:ba in network mk-functional-413212
I1009 18:51:59.143349  149559 main.go:141] libmachine: (functional-413212) Calling .GetSSHPort
I1009 18:51:59.143524  149559 main.go:141] libmachine: (functional-413212) Calling .GetSSHKeyPath
I1009 18:51:59.143746  149559 main.go:141] libmachine: (functional-413212) Calling .GetSSHUsername
I1009 18:51:59.143910  149559 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/functional-413212/id_rsa Username:docker}
I1009 18:51:59.237348  149559 ssh_runner.go:195] Run: sudo crictl images --output json
I1009 18:51:59.320270  149559 main.go:141] libmachine: Making call to close driver server
I1009 18:51:59.320286  149559 main.go:141] libmachine: (functional-413212) Calling .Close
I1009 18:51:59.320603  149559 main.go:141] libmachine: (functional-413212) DBG | Closing plugin on server side
I1009 18:51:59.320629  149559 main.go:141] libmachine: Successfully made call to close driver server
I1009 18:51:59.320645  149559 main.go:141] libmachine: Making call to close connection to plugin binary
I1009 18:51:59.320663  149559 main.go:141] libmachine: Making call to close driver server
I1009 18:51:59.320677  149559 main.go:141] libmachine: (functional-413212) Calling .Close
I1009 18:51:59.320939  149559 main.go:141] libmachine: Successfully made call to close driver server
I1009 18:51:59.320957  149559 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-413212 image ls --format json --alsologtostderr:
[{"id":"fbe22ad6fa63d36f546035ba392865b5334f94f340a5f07fc40ce59b28503eaf","repoDigests":["localhost/minikube-local-cache-test@sha256:27545e4a0cb851e980091333e6c40fde86a48aa44db6de8b5f82aaeef5596f29"],"repoTags":["localhost/minikube-local-cache-test:functional-413212"],"size":"3328"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938","repoDigests":["docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115","docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6"],"repoTags":["docker.io/library/nginx:latest"],"size":"163615579"},{"id":"52
546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["reg
istry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicba
se/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-413212"],"size":"4945146"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"5107333e08a87b836d48
ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["g
cr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83
aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-413212 image ls --format json --alsologtostderr:
I1009 18:51:58.864921  149504 out.go:360] Setting OutFile to fd 1 ...
I1009 18:51:58.865191  149504 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:51:58.865202  149504 out.go:374] Setting ErrFile to fd 2...
I1009 18:51:58.865207  149504 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:51:58.865439  149504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-136449/.minikube/bin
I1009 18:51:58.866037  149504 config.go:182] Loaded profile config "functional-413212": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:51:58.866155  149504 config.go:182] Loaded profile config "functional-413212": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:51:58.866536  149504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 18:51:58.866616  149504 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 18:51:58.879925  149504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38955
I1009 18:51:58.880444  149504 main.go:141] libmachine: () Calling .GetVersion
I1009 18:51:58.880893  149504 main.go:141] libmachine: Using API Version  1
I1009 18:51:58.880917  149504 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 18:51:58.881319  149504 main.go:141] libmachine: () Calling .GetMachineName
I1009 18:51:58.881543  149504 main.go:141] libmachine: (functional-413212) Calling .GetState
I1009 18:51:58.883799  149504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 18:51:58.883849  149504 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 18:51:58.897279  149504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38629
I1009 18:51:58.897715  149504 main.go:141] libmachine: () Calling .GetVersion
I1009 18:51:58.898156  149504 main.go:141] libmachine: Using API Version  1
I1009 18:51:58.898176  149504 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 18:51:58.898487  149504 main.go:141] libmachine: () Calling .GetMachineName
I1009 18:51:58.898697  149504 main.go:141] libmachine: (functional-413212) Calling .DriverName
I1009 18:51:58.898903  149504 ssh_runner.go:195] Run: systemctl --version
I1009 18:51:58.898926  149504 main.go:141] libmachine: (functional-413212) Calling .GetSSHHostname
I1009 18:51:58.901777  149504 main.go:141] libmachine: (functional-413212) DBG | domain functional-413212 has defined MAC address 52:54:00:8c:84:ba in network mk-functional-413212
I1009 18:51:58.902169  149504 main.go:141] libmachine: (functional-413212) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:84:ba", ip: ""} in network mk-functional-413212: {Iface:virbr1 ExpiryTime:2025-10-09 19:48:36 +0000 UTC Type:0 Mac:52:54:00:8c:84:ba Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:functional-413212 Clientid:01:52:54:00:8c:84:ba}
I1009 18:51:58.902194  149504 main.go:141] libmachine: (functional-413212) DBG | domain functional-413212 has defined IP address 192.168.39.166 and MAC address 52:54:00:8c:84:ba in network mk-functional-413212
I1009 18:51:58.902379  149504 main.go:141] libmachine: (functional-413212) Calling .GetSSHPort
I1009 18:51:58.902572  149504 main.go:141] libmachine: (functional-413212) Calling .GetSSHKeyPath
I1009 18:51:58.902714  149504 main.go:141] libmachine: (functional-413212) Calling .GetSSHUsername
I1009 18:51:58.902860  149504 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/functional-413212/id_rsa Username:docker}
I1009 18:51:58.990682  149504 ssh_runner.go:195] Run: sudo crictl images --output json
I1009 18:51:59.050741  149504 main.go:141] libmachine: Making call to close driver server
I1009 18:51:59.050753  149504 main.go:141] libmachine: (functional-413212) Calling .Close
I1009 18:51:59.051053  149504 main.go:141] libmachine: Successfully made call to close driver server
I1009 18:51:59.051070  149504 main.go:141] libmachine: Making call to close connection to plugin binary
I1009 18:51:59.051078  149504 main.go:141] libmachine: Making call to close driver server
I1009 18:51:59.051085  149504 main.go:141] libmachine: (functional-413212) Calling .Close
I1009 18:51:59.051315  149504 main.go:141] libmachine: Successfully made call to close driver server
I1009 18:51:59.051332  149504 main.go:141] libmachine: Making call to close connection to plugin binary
I1009 18:51:59.051343  149504 main.go:141] libmachine: (functional-413212) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-413212 image ls --format yaml --alsologtostderr:
- id: fbe22ad6fa63d36f546035ba392865b5334f94f340a5f07fc40ce59b28503eaf
repoDigests:
- localhost/minikube-local-cache-test@sha256:27545e4a0cb851e980091333e6c40fde86a48aa44db6de8b5f82aaeef5596f29
repoTags:
- localhost/minikube-local-cache-test:functional-413212
size: "3328"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-413212
size: "4945146"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938
repoDigests:
- docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
repoTags:
- docker.io/library/nginx:latest
size: "163615579"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-413212 image ls --format yaml --alsologtostderr:
I1009 18:51:58.619454  149456 out.go:360] Setting OutFile to fd 1 ...
I1009 18:51:58.619774  149456 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:51:58.619788  149456 out.go:374] Setting ErrFile to fd 2...
I1009 18:51:58.619795  149456 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:51:58.620146  149456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-136449/.minikube/bin
I1009 18:51:58.621053  149456 config.go:182] Loaded profile config "functional-413212": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:51:58.621204  149456 config.go:182] Loaded profile config "functional-413212": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:51:58.621863  149456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 18:51:58.622004  149456 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 18:51:58.635861  149456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46055
I1009 18:51:58.636415  149456 main.go:141] libmachine: () Calling .GetVersion
I1009 18:51:58.636934  149456 main.go:141] libmachine: Using API Version  1
I1009 18:51:58.636956  149456 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 18:51:58.637342  149456 main.go:141] libmachine: () Calling .GetMachineName
I1009 18:51:58.637533  149456 main.go:141] libmachine: (functional-413212) Calling .GetState
I1009 18:51:58.639896  149456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 18:51:58.639953  149456 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 18:51:58.654044  149456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38683
I1009 18:51:58.654533  149456 main.go:141] libmachine: () Calling .GetVersion
I1009 18:51:58.655006  149456 main.go:141] libmachine: Using API Version  1
I1009 18:51:58.655023  149456 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 18:51:58.655468  149456 main.go:141] libmachine: () Calling .GetMachineName
I1009 18:51:58.655673  149456 main.go:141] libmachine: (functional-413212) Calling .DriverName
I1009 18:51:58.655912  149456 ssh_runner.go:195] Run: systemctl --version
I1009 18:51:58.655935  149456 main.go:141] libmachine: (functional-413212) Calling .GetSSHHostname
I1009 18:51:58.659380  149456 main.go:141] libmachine: (functional-413212) DBG | domain functional-413212 has defined MAC address 52:54:00:8c:84:ba in network mk-functional-413212
I1009 18:51:58.659909  149456 main.go:141] libmachine: (functional-413212) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:84:ba", ip: ""} in network mk-functional-413212: {Iface:virbr1 ExpiryTime:2025-10-09 19:48:36 +0000 UTC Type:0 Mac:52:54:00:8c:84:ba Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:functional-413212 Clientid:01:52:54:00:8c:84:ba}
I1009 18:51:58.659941  149456 main.go:141] libmachine: (functional-413212) DBG | domain functional-413212 has defined IP address 192.168.39.166 and MAC address 52:54:00:8c:84:ba in network mk-functional-413212
I1009 18:51:58.660185  149456 main.go:141] libmachine: (functional-413212) Calling .GetSSHPort
I1009 18:51:58.660373  149456 main.go:141] libmachine: (functional-413212) Calling .GetSSHKeyPath
I1009 18:51:58.660577  149456 main.go:141] libmachine: (functional-413212) Calling .GetSSHUsername
I1009 18:51:58.660720  149456 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/functional-413212/id_rsa Username:docker}
I1009 18:51:58.763915  149456 ssh_runner.go:195] Run: sudo crictl images --output json
I1009 18:51:58.809098  149456 main.go:141] libmachine: Making call to close driver server
I1009 18:51:58.809112  149456 main.go:141] libmachine: (functional-413212) Calling .Close
I1009 18:51:58.809358  149456 main.go:141] libmachine: Successfully made call to close driver server
I1009 18:51:58.809376  149456 main.go:141] libmachine: Making call to close connection to plugin binary
I1009 18:51:58.809385  149456 main.go:141] libmachine: Making call to close driver server
I1009 18:51:58.809393  149456 main.go:141] libmachine: (functional-413212) Calling .Close
I1009 18:51:58.809658  149456 main.go:141] libmachine: Successfully made call to close driver server
I1009 18:51:58.809677  149456 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-413212 ssh pgrep buildkitd: exit status 1 (205.490474ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 image build -t localhost/my-image:functional-413212 testdata/build --alsologtostderr
2025/10/09 18:51:59 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-413212 image build -t localhost/my-image:functional-413212 testdata/build --alsologtostderr: (3.788069192s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-413212 image build -t localhost/my-image:functional-413212 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 858ff50d60a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-413212
--> 641592ee78b
Successfully tagged localhost/my-image:functional-413212
641592ee78bafc04228030ee802fe760264859659a1c121d4dd9cb1850672122
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-413212 image build -t localhost/my-image:functional-413212 testdata/build --alsologtostderr:
I1009 18:51:58.974380  149534 out.go:360] Setting OutFile to fd 1 ...
I1009 18:51:58.974706  149534 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:51:58.974717  149534 out.go:374] Setting ErrFile to fd 2...
I1009 18:51:58.974724  149534 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:51:58.974984  149534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-136449/.minikube/bin
I1009 18:51:58.975654  149534 config.go:182] Loaded profile config "functional-413212": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:51:58.976371  149534 config.go:182] Loaded profile config "functional-413212": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:51:58.976762  149534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 18:51:58.976800  149534 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 18:51:58.990548  149534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44023
I1009 18:51:58.991087  149534 main.go:141] libmachine: () Calling .GetVersion
I1009 18:51:58.991618  149534 main.go:141] libmachine: Using API Version  1
I1009 18:51:58.991646  149534 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 18:51:58.992017  149534 main.go:141] libmachine: () Calling .GetMachineName
I1009 18:51:58.992225  149534 main.go:141] libmachine: (functional-413212) Calling .GetState
I1009 18:51:58.994746  149534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 18:51:58.994808  149534 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 18:51:59.008545  149534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41607
I1009 18:51:59.009217  149534 main.go:141] libmachine: () Calling .GetVersion
I1009 18:51:59.009741  149534 main.go:141] libmachine: Using API Version  1
I1009 18:51:59.009770  149534 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 18:51:59.010201  149534 main.go:141] libmachine: () Calling .GetMachineName
I1009 18:51:59.010437  149534 main.go:141] libmachine: (functional-413212) Calling .DriverName
I1009 18:51:59.010776  149534 ssh_runner.go:195] Run: systemctl --version
I1009 18:51:59.010810  149534 main.go:141] libmachine: (functional-413212) Calling .GetSSHHostname
I1009 18:51:59.014337  149534 main.go:141] libmachine: (functional-413212) DBG | domain functional-413212 has defined MAC address 52:54:00:8c:84:ba in network mk-functional-413212
I1009 18:51:59.014844  149534 main.go:141] libmachine: (functional-413212) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:84:ba", ip: ""} in network mk-functional-413212: {Iface:virbr1 ExpiryTime:2025-10-09 19:48:36 +0000 UTC Type:0 Mac:52:54:00:8c:84:ba Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:functional-413212 Clientid:01:52:54:00:8c:84:ba}
I1009 18:51:59.014876  149534 main.go:141] libmachine: (functional-413212) DBG | domain functional-413212 has defined IP address 192.168.39.166 and MAC address 52:54:00:8c:84:ba in network mk-functional-413212
I1009 18:51:59.015032  149534 main.go:141] libmachine: (functional-413212) Calling .GetSSHPort
I1009 18:51:59.015238  149534 main.go:141] libmachine: (functional-413212) Calling .GetSSHKeyPath
I1009 18:51:59.015412  149534 main.go:141] libmachine: (functional-413212) Calling .GetSSHUsername
I1009 18:51:59.015585  149534 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/functional-413212/id_rsa Username:docker}
I1009 18:51:59.106862  149534 build_images.go:161] Building image from path: /tmp/build.140399866.tar
I1009 18:51:59.106909  149534 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1009 18:51:59.122766  149534 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.140399866.tar
I1009 18:51:59.128613  149534 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.140399866.tar: stat -c "%s %y" /var/lib/minikube/build/build.140399866.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.140399866.tar': No such file or directory
I1009 18:51:59.128666  149534 ssh_runner.go:362] scp /tmp/build.140399866.tar --> /var/lib/minikube/build/build.140399866.tar (3072 bytes)
I1009 18:51:59.169002  149534 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.140399866
I1009 18:51:59.182141  149534 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.140399866 -xf /var/lib/minikube/build/build.140399866.tar
I1009 18:51:59.195072  149534 crio.go:315] Building image: /var/lib/minikube/build/build.140399866
I1009 18:51:59.195171  149534 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-413212 /var/lib/minikube/build/build.140399866 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1009 18:52:02.667591  149534 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-413212 /var/lib/minikube/build/build.140399866 --cgroup-manager=cgroupfs: (3.472368002s)
I1009 18:52:02.667676  149534 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.140399866
I1009 18:52:02.690270  149534 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.140399866.tar
I1009 18:52:02.708914  149534 build_images.go:217] Built localhost/my-image:functional-413212 from /tmp/build.140399866.tar
I1009 18:52:02.708982  149534 build_images.go:133] succeeded building to: functional-413212
I1009 18:52:02.708990  149534 build_images.go:134] failed building to: 
I1009 18:52:02.709028  149534 main.go:141] libmachine: Making call to close driver server
I1009 18:52:02.709044  149534 main.go:141] libmachine: (functional-413212) Calling .Close
I1009 18:52:02.709372  149534 main.go:141] libmachine: Successfully made call to close driver server
I1009 18:52:02.709390  149534 main.go:141] libmachine: Making call to close connection to plugin binary
I1009 18:52:02.709406  149534 main.go:141] libmachine: Making call to close driver server
I1009 18:52:02.709414  149534 main.go:141] libmachine: (functional-413212) Calling .Close
I1009 18:52:02.709433  149534 main.go:141] libmachine: (functional-413212) DBG | Closing plugin on server side
I1009 18:52:02.709742  149534 main.go:141] libmachine: Successfully made call to close driver server
I1009 18:52:02.709768  149534 main.go:141] libmachine: (functional-413212) DBG | Closing plugin on server side
I1009 18:52:02.709783  149534 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.884214802s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-413212
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.90s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 image load --daemon kicbase/echo-server:functional-413212 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 image load --daemon kicbase/echo-server:functional-413212 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-413212
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 image load --daemon kicbase/echo-server:functional-413212 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.80s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 image save kicbase/echo-server:functional-413212 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 image rm kicbase/echo-server:functional-413212 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-413212
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 image save --daemon kicbase/echo-server:functional-413212 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-413212
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.46s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.62s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (29.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-413212 /tmp/TestFunctionalparallelMountCmdany-port3375024087/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760035885405395046" to /tmp/TestFunctionalparallelMountCmdany-port3375024087/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760035885405395046" to /tmp/TestFunctionalparallelMountCmdany-port3375024087/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760035885405395046" to /tmp/TestFunctionalparallelMountCmdany-port3375024087/001/test-1760035885405395046
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-413212 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (298.143446ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1009 18:51:25.703889  140358 retry.go:31] will retry after 602.205468ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  9 18:51 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  9 18:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  9 18:51 test-1760035885405395046
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh cat /mount-9p/test-1760035885405395046
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-413212 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [c2568eb7-e754-4f3a-96bf-3fc93d5016ca] Pending
helpers_test.go:352: "busybox-mount" [c2568eb7-e754-4f3a-96bf-3fc93d5016ca] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [c2568eb7-e754-4f3a-96bf-3fc93d5016ca] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [c2568eb7-e754-4f3a-96bf-3fc93d5016ca] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 26.013881713s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-413212 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-413212 /tmp/TestFunctionalparallelMountCmdany-port3375024087/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (29.21s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 service list -o json
functional_test.go:1504: Took "355.133605ms" to run "out/minikube-linux-amd64 -p functional-413212 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "409.364517ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "49.670008ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.166:31151
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "459.67049ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "53.774166ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.45s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.166:31151
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-413212 /tmp/TestFunctionalparallelMountCmdspecific-port3955962713/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-413212 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (239.017627ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1009 18:51:54.850498  140358 retry.go:31] will retry after 490.157854ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-413212 /tmp/TestFunctionalparallelMountCmdspecific-port3955962713/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-413212 ssh "sudo umount -f /mount-9p": exit status 1 (251.537318ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-413212 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-413212 /tmp/TestFunctionalparallelMountCmdspecific-port3955962713/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.85s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-413212 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3887184648/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-413212 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3887184648/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-413212 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3887184648/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-413212 ssh "findmnt -T" /mount1: exit status 1 (266.11732ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1009 18:51:56.727307  140358 retry.go:31] will retry after 606.193112ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-413212 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-413212 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-413212 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3887184648/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-413212 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3887184648/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-413212 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3887184648/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.52s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-413212
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-413212
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-413212
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (241.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1009 18:52:59.344694  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:53:27.053473  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-827655 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (4m0.283674974s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (241.02s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (8.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-827655 kubectl -- rollout status deployment/busybox: (6.30830726s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 kubectl -- exec busybox-7b57f96db7-9wz66 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 kubectl -- exec busybox-7b57f96db7-b4gbh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 kubectl -- exec busybox-7b57f96db7-wrd85 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 kubectl -- exec busybox-7b57f96db7-9wz66 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 kubectl -- exec busybox-7b57f96db7-b4gbh -- nslookup kubernetes.default
E1009 18:56:15.366701  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:56:15.373821  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 kubectl -- exec busybox-7b57f96db7-wrd85 -- nslookup kubernetes.default
E1009 18:56:15.385781  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:56:15.407599  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:56:15.449072  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:56:15.530600  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 kubectl -- exec busybox-7b57f96db7-9wz66 -- nslookup kubernetes.default.svc.cluster.local
E1009 18:56:15.692137  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 kubectl -- exec busybox-7b57f96db7-b4gbh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 kubectl -- exec busybox-7b57f96db7-wrd85 -- nslookup kubernetes.default.svc.cluster.local
E1009 18:56:16.013803  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/DeployApp (8.51s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 kubectl -- exec busybox-7b57f96db7-9wz66 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 kubectl -- exec busybox-7b57f96db7-9wz66 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 kubectl -- exec busybox-7b57f96db7-b4gbh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E1009 18:56:16.655251  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 kubectl -- exec busybox-7b57f96db7-b4gbh -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 kubectl -- exec busybox-7b57f96db7-wrd85 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 kubectl -- exec busybox-7b57f96db7-wrd85 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.23s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (48.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 node add --alsologtostderr -v 5
E1009 18:56:17.937355  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:56:20.499593  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:56:25.621511  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:56:35.863301  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:56:56.344786  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-827655 node add --alsologtostderr -v 5: (47.571710101s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (48.51s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-827655 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 cp testdata/cp-test.txt ha-827655:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 cp ha-827655:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2291036720/001/cp-test_ha-827655.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 cp ha-827655:/home/docker/cp-test.txt ha-827655-m02:/home/docker/cp-test_ha-827655_ha-827655-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655-m02 "sudo cat /home/docker/cp-test_ha-827655_ha-827655-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 cp ha-827655:/home/docker/cp-test.txt ha-827655-m03:/home/docker/cp-test_ha-827655_ha-827655-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655-m03 "sudo cat /home/docker/cp-test_ha-827655_ha-827655-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 cp ha-827655:/home/docker/cp-test.txt ha-827655-m04:/home/docker/cp-test_ha-827655_ha-827655-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655-m04 "sudo cat /home/docker/cp-test_ha-827655_ha-827655-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 cp testdata/cp-test.txt ha-827655-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 cp ha-827655-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2291036720/001/cp-test_ha-827655-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 cp ha-827655-m02:/home/docker/cp-test.txt ha-827655:/home/docker/cp-test_ha-827655-m02_ha-827655.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655 "sudo cat /home/docker/cp-test_ha-827655-m02_ha-827655.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 cp ha-827655-m02:/home/docker/cp-test.txt ha-827655-m03:/home/docker/cp-test_ha-827655-m02_ha-827655-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655-m03 "sudo cat /home/docker/cp-test_ha-827655-m02_ha-827655-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 cp ha-827655-m02:/home/docker/cp-test.txt ha-827655-m04:/home/docker/cp-test_ha-827655-m02_ha-827655-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655-m04 "sudo cat /home/docker/cp-test_ha-827655-m02_ha-827655-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 cp testdata/cp-test.txt ha-827655-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 cp ha-827655-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2291036720/001/cp-test_ha-827655-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 cp ha-827655-m03:/home/docker/cp-test.txt ha-827655:/home/docker/cp-test_ha-827655-m03_ha-827655.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655 "sudo cat /home/docker/cp-test_ha-827655-m03_ha-827655.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 cp ha-827655-m03:/home/docker/cp-test.txt ha-827655-m02:/home/docker/cp-test_ha-827655-m03_ha-827655-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655-m02 "sudo cat /home/docker/cp-test_ha-827655-m03_ha-827655-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 cp ha-827655-m03:/home/docker/cp-test.txt ha-827655-m04:/home/docker/cp-test_ha-827655-m03_ha-827655-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655-m04 "sudo cat /home/docker/cp-test_ha-827655-m03_ha-827655-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 cp testdata/cp-test.txt ha-827655-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 cp ha-827655-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2291036720/001/cp-test_ha-827655-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 cp ha-827655-m04:/home/docker/cp-test.txt ha-827655:/home/docker/cp-test_ha-827655-m04_ha-827655.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655 "sudo cat /home/docker/cp-test_ha-827655-m04_ha-827655.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 cp ha-827655-m04:/home/docker/cp-test.txt ha-827655-m02:/home/docker/cp-test_ha-827655-m04_ha-827655-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655-m02 "sudo cat /home/docker/cp-test_ha-827655-m04_ha-827655-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 cp ha-827655-m04:/home/docker/cp-test.txt ha-827655-m03:/home/docker/cp-test_ha-827655-m04_ha-827655-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 ssh -n ha-827655-m03 "sudo cat /home/docker/cp-test_ha-827655-m04_ha-827655-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.32s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (87.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 node stop m02 --alsologtostderr -v 5
E1009 18:57:37.307792  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:57:59.346842  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-827655 node stop m02 --alsologtostderr -v 5: (1m27.223510784s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-827655 status --alsologtostderr -v 5: exit status 7 (681.438491ms)

                                                
                                                
-- stdout --
	ha-827655
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-827655-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-827655-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-827655-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:58:47.465351  154382 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:58:47.465617  154382 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:58:47.465625  154382 out.go:374] Setting ErrFile to fd 2...
	I1009 18:58:47.465629  154382 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:58:47.465806  154382 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-136449/.minikube/bin
	I1009 18:58:47.465975  154382 out.go:368] Setting JSON to false
	I1009 18:58:47.465998  154382 mustload.go:65] Loading cluster: ha-827655
	I1009 18:58:47.466096  154382 notify.go:221] Checking for updates...
	I1009 18:58:47.466438  154382 config.go:182] Loaded profile config "ha-827655": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:58:47.466456  154382 status.go:174] checking status of ha-827655 ...
	I1009 18:58:47.467044  154382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:58:47.467091  154382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:58:47.490306  154382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35941
	I1009 18:58:47.490821  154382 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:58:47.491358  154382 main.go:141] libmachine: Using API Version  1
	I1009 18:58:47.491378  154382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:58:47.491834  154382 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:58:47.492027  154382 main.go:141] libmachine: (ha-827655) Calling .GetState
	I1009 18:58:47.494071  154382 status.go:371] ha-827655 host status = "Running" (err=<nil>)
	I1009 18:58:47.494087  154382 host.go:66] Checking if "ha-827655" exists ...
	I1009 18:58:47.494362  154382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:58:47.494397  154382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:58:47.508677  154382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36715
	I1009 18:58:47.509142  154382 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:58:47.509637  154382 main.go:141] libmachine: Using API Version  1
	I1009 18:58:47.509662  154382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:58:47.510094  154382 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:58:47.510293  154382 main.go:141] libmachine: (ha-827655) Calling .GetIP
	I1009 18:58:47.513366  154382 main.go:141] libmachine: (ha-827655) DBG | domain ha-827655 has defined MAC address 52:54:00:67:51:40 in network mk-ha-827655
	I1009 18:58:47.513830  154382 main.go:141] libmachine: (ha-827655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:51:40", ip: ""} in network mk-ha-827655: {Iface:virbr1 ExpiryTime:2025-10-09 19:52:22 +0000 UTC Type:0 Mac:52:54:00:67:51:40 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-827655 Clientid:01:52:54:00:67:51:40}
	I1009 18:58:47.513860  154382 main.go:141] libmachine: (ha-827655) DBG | domain ha-827655 has defined IP address 192.168.39.110 and MAC address 52:54:00:67:51:40 in network mk-ha-827655
	I1009 18:58:47.513991  154382 host.go:66] Checking if "ha-827655" exists ...
	I1009 18:58:47.514282  154382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:58:47.514336  154382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:58:47.527215  154382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35445
	I1009 18:58:47.527608  154382 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:58:47.528078  154382 main.go:141] libmachine: Using API Version  1
	I1009 18:58:47.528103  154382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:58:47.528506  154382 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:58:47.528703  154382 main.go:141] libmachine: (ha-827655) Calling .DriverName
	I1009 18:58:47.528894  154382 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:58:47.528924  154382 main.go:141] libmachine: (ha-827655) Calling .GetSSHHostname
	I1009 18:58:47.532159  154382 main.go:141] libmachine: (ha-827655) DBG | domain ha-827655 has defined MAC address 52:54:00:67:51:40 in network mk-ha-827655
	I1009 18:58:47.532686  154382 main.go:141] libmachine: (ha-827655) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:51:40", ip: ""} in network mk-ha-827655: {Iface:virbr1 ExpiryTime:2025-10-09 19:52:22 +0000 UTC Type:0 Mac:52:54:00:67:51:40 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-827655 Clientid:01:52:54:00:67:51:40}
	I1009 18:58:47.532725  154382 main.go:141] libmachine: (ha-827655) DBG | domain ha-827655 has defined IP address 192.168.39.110 and MAC address 52:54:00:67:51:40 in network mk-ha-827655
	I1009 18:58:47.532904  154382 main.go:141] libmachine: (ha-827655) Calling .GetSSHPort
	I1009 18:58:47.533091  154382 main.go:141] libmachine: (ha-827655) Calling .GetSSHKeyPath
	I1009 18:58:47.533246  154382 main.go:141] libmachine: (ha-827655) Calling .GetSSHUsername
	I1009 18:58:47.533412  154382 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/ha-827655/id_rsa Username:docker}
	I1009 18:58:47.617041  154382 ssh_runner.go:195] Run: systemctl --version
	I1009 18:58:47.624119  154382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:58:47.642824  154382 kubeconfig.go:125] found "ha-827655" server: "https://192.168.39.254:8443"
	I1009 18:58:47.642873  154382 api_server.go:166] Checking apiserver status ...
	I1009 18:58:47.642908  154382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:58:47.667146  154382 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1431/cgroup
	W1009 18:58:47.679512  154382 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1431/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:58:47.679579  154382 ssh_runner.go:195] Run: ls
	I1009 18:58:47.684971  154382 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1009 18:58:47.694076  154382 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1009 18:58:47.694105  154382 status.go:463] ha-827655 apiserver status = Running (err=<nil>)
	I1009 18:58:47.694122  154382 status.go:176] ha-827655 status: &{Name:ha-827655 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 18:58:47.694151  154382 status.go:174] checking status of ha-827655-m02 ...
	I1009 18:58:47.694464  154382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:58:47.694505  154382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:58:47.707673  154382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40543
	I1009 18:58:47.708168  154382 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:58:47.708670  154382 main.go:141] libmachine: Using API Version  1
	I1009 18:58:47.708695  154382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:58:47.709004  154382 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:58:47.709194  154382 main.go:141] libmachine: (ha-827655-m02) Calling .GetState
	I1009 18:58:47.710986  154382 status.go:371] ha-827655-m02 host status = "Stopped" (err=<nil>)
	I1009 18:58:47.710999  154382 status.go:384] host is not running, skipping remaining checks
	I1009 18:58:47.711005  154382 status.go:176] ha-827655-m02 status: &{Name:ha-827655-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 18:58:47.711030  154382 status.go:174] checking status of ha-827655-m03 ...
	I1009 18:58:47.711366  154382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:58:47.711512  154382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:58:47.726941  154382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44501
	I1009 18:58:47.727380  154382 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:58:47.727869  154382 main.go:141] libmachine: Using API Version  1
	I1009 18:58:47.727897  154382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:58:47.728322  154382 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:58:47.728534  154382 main.go:141] libmachine: (ha-827655-m03) Calling .GetState
	I1009 18:58:47.730586  154382 status.go:371] ha-827655-m03 host status = "Running" (err=<nil>)
	I1009 18:58:47.730605  154382 host.go:66] Checking if "ha-827655-m03" exists ...
	I1009 18:58:47.731096  154382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:58:47.731148  154382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:58:47.747200  154382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43505
	I1009 18:58:47.747727  154382 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:58:47.748197  154382 main.go:141] libmachine: Using API Version  1
	I1009 18:58:47.748220  154382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:58:47.748528  154382 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:58:47.748688  154382 main.go:141] libmachine: (ha-827655-m03) Calling .GetIP
	I1009 18:58:47.751686  154382 main.go:141] libmachine: (ha-827655-m03) DBG | domain ha-827655-m03 has defined MAC address 52:54:00:74:39:36 in network mk-ha-827655
	I1009 18:58:47.752160  154382 main.go:141] libmachine: (ha-827655-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:39:36", ip: ""} in network mk-ha-827655: {Iface:virbr1 ExpiryTime:2025-10-09 19:54:54 +0000 UTC Type:0 Mac:52:54:00:74:39:36 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-827655-m03 Clientid:01:52:54:00:74:39:36}
	I1009 18:58:47.752187  154382 main.go:141] libmachine: (ha-827655-m03) DBG | domain ha-827655-m03 has defined IP address 192.168.39.157 and MAC address 52:54:00:74:39:36 in network mk-ha-827655
	I1009 18:58:47.752356  154382 host.go:66] Checking if "ha-827655-m03" exists ...
	I1009 18:58:47.752712  154382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:58:47.752757  154382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:58:47.766001  154382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44863
	I1009 18:58:47.766488  154382 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:58:47.766945  154382 main.go:141] libmachine: Using API Version  1
	I1009 18:58:47.766981  154382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:58:47.767357  154382 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:58:47.767601  154382 main.go:141] libmachine: (ha-827655-m03) Calling .DriverName
	I1009 18:58:47.767794  154382 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:58:47.767819  154382 main.go:141] libmachine: (ha-827655-m03) Calling .GetSSHHostname
	I1009 18:58:47.770605  154382 main.go:141] libmachine: (ha-827655-m03) DBG | domain ha-827655-m03 has defined MAC address 52:54:00:74:39:36 in network mk-ha-827655
	I1009 18:58:47.771029  154382 main.go:141] libmachine: (ha-827655-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:39:36", ip: ""} in network mk-ha-827655: {Iface:virbr1 ExpiryTime:2025-10-09 19:54:54 +0000 UTC Type:0 Mac:52:54:00:74:39:36 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-827655-m03 Clientid:01:52:54:00:74:39:36}
	I1009 18:58:47.771051  154382 main.go:141] libmachine: (ha-827655-m03) DBG | domain ha-827655-m03 has defined IP address 192.168.39.157 and MAC address 52:54:00:74:39:36 in network mk-ha-827655
	I1009 18:58:47.771246  154382 main.go:141] libmachine: (ha-827655-m03) Calling .GetSSHPort
	I1009 18:58:47.771417  154382 main.go:141] libmachine: (ha-827655-m03) Calling .GetSSHKeyPath
	I1009 18:58:47.771569  154382 main.go:141] libmachine: (ha-827655-m03) Calling .GetSSHUsername
	I1009 18:58:47.771712  154382 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/ha-827655-m03/id_rsa Username:docker}
	I1009 18:58:47.864317  154382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:58:47.885494  154382 kubeconfig.go:125] found "ha-827655" server: "https://192.168.39.254:8443"
	I1009 18:58:47.885536  154382 api_server.go:166] Checking apiserver status ...
	I1009 18:58:47.885610  154382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:58:47.907344  154382 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1742/cgroup
	W1009 18:58:47.918313  154382 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1742/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:58:47.918390  154382 ssh_runner.go:195] Run: ls
	I1009 18:58:47.923486  154382 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1009 18:58:47.929424  154382 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1009 18:58:47.929448  154382 status.go:463] ha-827655-m03 apiserver status = Running (err=<nil>)
	I1009 18:58:47.929458  154382 status.go:176] ha-827655-m03 status: &{Name:ha-827655-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 18:58:47.929501  154382 status.go:174] checking status of ha-827655-m04 ...
	I1009 18:58:47.929812  154382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:58:47.929857  154382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:58:47.942924  154382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39315
	I1009 18:58:47.943366  154382 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:58:47.943895  154382 main.go:141] libmachine: Using API Version  1
	I1009 18:58:47.943924  154382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:58:47.944243  154382 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:58:47.944441  154382 main.go:141] libmachine: (ha-827655-m04) Calling .GetState
	I1009 18:58:47.946152  154382 status.go:371] ha-827655-m04 host status = "Running" (err=<nil>)
	I1009 18:58:47.946170  154382 host.go:66] Checking if "ha-827655-m04" exists ...
	I1009 18:58:47.946456  154382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:58:47.946488  154382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:58:47.960053  154382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35347
	I1009 18:58:47.960481  154382 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:58:47.960845  154382 main.go:141] libmachine: Using API Version  1
	I1009 18:58:47.960866  154382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:58:47.961205  154382 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:58:47.961397  154382 main.go:141] libmachine: (ha-827655-m04) Calling .GetIP
	I1009 18:58:47.964188  154382 main.go:141] libmachine: (ha-827655-m04) DBG | domain ha-827655-m04 has defined MAC address 52:54:00:44:cd:a2 in network mk-ha-827655
	I1009 18:58:47.964637  154382 main.go:141] libmachine: (ha-827655-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:cd:a2", ip: ""} in network mk-ha-827655: {Iface:virbr1 ExpiryTime:2025-10-09 19:56:34 +0000 UTC Type:0 Mac:52:54:00:44:cd:a2 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-827655-m04 Clientid:01:52:54:00:44:cd:a2}
	I1009 18:58:47.964665  154382 main.go:141] libmachine: (ha-827655-m04) DBG | domain ha-827655-m04 has defined IP address 192.168.39.113 and MAC address 52:54:00:44:cd:a2 in network mk-ha-827655
	I1009 18:58:47.964867  154382 host.go:66] Checking if "ha-827655-m04" exists ...
	I1009 18:58:47.965189  154382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:58:47.965258  154382 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:58:47.978696  154382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42737
	I1009 18:58:47.979070  154382 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:58:47.979477  154382 main.go:141] libmachine: Using API Version  1
	I1009 18:58:47.979500  154382 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:58:47.979812  154382 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:58:47.979993  154382 main.go:141] libmachine: (ha-827655-m04) Calling .DriverName
	I1009 18:58:47.980154  154382 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:58:47.980172  154382 main.go:141] libmachine: (ha-827655-m04) Calling .GetSSHHostname
	I1009 18:58:47.982692  154382 main.go:141] libmachine: (ha-827655-m04) DBG | domain ha-827655-m04 has defined MAC address 52:54:00:44:cd:a2 in network mk-ha-827655
	I1009 18:58:47.983156  154382 main.go:141] libmachine: (ha-827655-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:cd:a2", ip: ""} in network mk-ha-827655: {Iface:virbr1 ExpiryTime:2025-10-09 19:56:34 +0000 UTC Type:0 Mac:52:54:00:44:cd:a2 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:ha-827655-m04 Clientid:01:52:54:00:44:cd:a2}
	I1009 18:58:47.983199  154382 main.go:141] libmachine: (ha-827655-m04) DBG | domain ha-827655-m04 has defined IP address 192.168.39.113 and MAC address 52:54:00:44:cd:a2 in network mk-ha-827655
	I1009 18:58:47.983428  154382 main.go:141] libmachine: (ha-827655-m04) Calling .GetSSHPort
	I1009 18:58:47.983630  154382 main.go:141] libmachine: (ha-827655-m04) Calling .GetSSHKeyPath
	I1009 18:58:47.983767  154382 main.go:141] libmachine: (ha-827655-m04) Calling .GetSSHUsername
	I1009 18:58:47.983917  154382 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/ha-827655-m04/id_rsa Username:docker}
	I1009 18:58:48.073119  154382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:58:48.095689  154382 status.go:176] ha-827655-m04 status: &{Name:ha-827655-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (87.91s)
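Note on the status log above: the per-node check runs a fixed sequence — `df -h /var` for disk pressure, `pgrep -xnf kube-apiserver.*minikube.*` to find the apiserver process, a lookup of its freezer cgroup (the "unable to find freezer cgroup" warning is harmless and likely just means the host uses cgroup v2, where that named controller does not exist), and finally an HTTPS probe of `/healthz` on the load-balancer endpoint. A minimal Go sketch of that last probe, using only the endpoint shown in the log; skipping TLS verification here stands in for loading the cluster CA, which the real check gets from the kubeconfig:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above.
	const healthz = "https://192.168.39.254:8443/healthz"
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(healthz)
	if err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A 200 with body "ok" is what the log records as "apiserver status = Running".
	fmt.Printf("%s returned %d: %s\n", healthz, resp.StatusCode, body)
}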

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 node start m02 --alsologtostderr -v 5
E1009 18:58:59.232691  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-827655 node start m02 --alsologtostderr -v 5: (42.980965544s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (44.00s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.026738477s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.03s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (379.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 stop --alsologtostderr -v 5
E1009 19:01:15.368753  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:01:43.074615  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:02:59.347033  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-827655 stop --alsologtostderr -v 5: (4m9.311683455s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 start --wait true --alsologtostderr -v 5
E1009 19:04:22.415468  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-827655 start --wait true --alsologtostderr -v 5: (2m9.938973372s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (379.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (18.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-827655 node delete m03 --alsologtostderr -v 5: (18.160141833s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.97s)
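The final assertion above drives `kubectl get nodes` through a go-template that prints each node's Ready condition status. A rough client-go equivalent, shown only to make the template's intent explicit — the test itself shells out to kubectl, so the use of client-go here is an illustration, not the test's implementation; the kubeconfig path is the one exported for this job:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// KUBECONFIG path as reported elsewhere in this run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21683-136449/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Mirrors the go-template: emit the Ready condition status for every node.
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Println(n.Name, c.Status)
			}
		}
	}
}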

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (249.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 stop --alsologtostderr -v 5
E1009 19:06:15.366184  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:07:59.346374  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-827655 stop --alsologtostderr -v 5: (4m9.329158318s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-827655 status --alsologtostderr -v 5: exit status 7 (112.474423ms)

                                                
                                                
-- stdout --
	ha-827655
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-827655-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-827655-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:10:22.193223  158337 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:10:22.193334  158337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:10:22.193342  158337 out.go:374] Setting ErrFile to fd 2...
	I1009 19:10:22.193345  158337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:10:22.193533  158337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-136449/.minikube/bin
	I1009 19:10:22.193728  158337 out.go:368] Setting JSON to false
	I1009 19:10:22.193753  158337 mustload.go:65] Loading cluster: ha-827655
	I1009 19:10:22.193831  158337 notify.go:221] Checking for updates...
	I1009 19:10:22.194144  158337 config.go:182] Loaded profile config "ha-827655": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:10:22.194159  158337 status.go:174] checking status of ha-827655 ...
	I1009 19:10:22.194578  158337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:10:22.194618  158337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:10:22.218724  158337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44297
	I1009 19:10:22.219345  158337 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:10:22.220146  158337 main.go:141] libmachine: Using API Version  1
	I1009 19:10:22.220190  158337 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:10:22.220587  158337 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:10:22.220782  158337 main.go:141] libmachine: (ha-827655) Calling .GetState
	I1009 19:10:22.223070  158337 status.go:371] ha-827655 host status = "Stopped" (err=<nil>)
	I1009 19:10:22.223085  158337 status.go:384] host is not running, skipping remaining checks
	I1009 19:10:22.223091  158337 status.go:176] ha-827655 status: &{Name:ha-827655 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:10:22.223110  158337 status.go:174] checking status of ha-827655-m02 ...
	I1009 19:10:22.223422  158337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:10:22.223480  158337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:10:22.236864  158337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45649
	I1009 19:10:22.237290  158337 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:10:22.237694  158337 main.go:141] libmachine: Using API Version  1
	I1009 19:10:22.237721  158337 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:10:22.238037  158337 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:10:22.238206  158337 main.go:141] libmachine: (ha-827655-m02) Calling .GetState
	I1009 19:10:22.240174  158337 status.go:371] ha-827655-m02 host status = "Stopped" (err=<nil>)
	I1009 19:10:22.240198  158337 status.go:384] host is not running, skipping remaining checks
	I1009 19:10:22.240203  158337 status.go:176] ha-827655-m02 status: &{Name:ha-827655-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:10:22.240219  158337 status.go:174] checking status of ha-827655-m04 ...
	I1009 19:10:22.240608  158337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:10:22.240653  158337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:10:22.253915  158337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40519
	I1009 19:10:22.254371  158337 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:10:22.254811  158337 main.go:141] libmachine: Using API Version  1
	I1009 19:10:22.254836  158337 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:10:22.255162  158337 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:10:22.255387  158337 main.go:141] libmachine: (ha-827655-m04) Calling .GetState
	I1009 19:10:22.257217  158337 status.go:371] ha-827655-m04 host status = "Stopped" (err=<nil>)
	I1009 19:10:22.257238  158337 status.go:384] host is not running, skipping remaining checks
	I1009 19:10:22.257245  158337 status.go:176] ha-827655-m04 status: &{Name:ha-827655-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (249.44s)
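As the log shows, `minikube status` signals stopped components through its exit code (status 7 here) rather than through an error, and the test still passes because it inspects the per-node stdout. A small sketch of invoking the command and reading that code; the binary path and profile name are taken from this run, and the precise meaning of each nonzero value is not asserted here:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-827655", "status")
	out, _ := cmd.CombinedOutput()
	fmt.Print(string(out))
	// A nonzero exit code (7 in the log above) means some component is not
	// running; by itself it is not treated as a test failure.
	if exitErr, ok := cmd.ProcessState, true; ok && exitErr != nil {
		fmt.Println("status exit code:", exitErr.ExitCode())
	}
}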

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (105.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1009 19:11:15.370785  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-827655 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m44.617884208s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (105.42s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (108.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 node add --control-plane --alsologtostderr -v 5
E1009 19:12:38.438800  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:12:59.346828  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-827655 node add --control-plane --alsologtostderr -v 5: (1m47.949198409s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-827655 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (108.83s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

                                                
                                    
x
+
TestJSONOutput/start/Command (88.48s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-055222 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-055222 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m28.480258929s)
--- PASS: TestJSONOutput/start/Command (88.48s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.78s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-055222 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.78s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-055222 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (6.95s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-055222 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-055222 --output=json --user=testUser: (6.951671793s)
--- PASS: TestJSONOutput/stop/Command (6.95s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-075509 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-075509 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (59.481307ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"120b3c97-c413-4005-9713-4d6fd244eb43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-075509] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ba34aa37-a148-4c79-9ea3-c65aa4c9c55f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21683"}}
	{"specversion":"1.0","id":"49c9fc69-6c5d-4286-8941-f952375d3b20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5133745d-8c5f-4cdf-b0eb-834c6c1ce450","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21683-136449/kubeconfig"}}
	{"specversion":"1.0","id":"125bde99-609c-48f7-9cfc-934262a90280","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-136449/.minikube"}}
	{"specversion":"1.0","id":"72b77573-2ab9-4951-8249-2591302ec292","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"82031810-0832-4f88-9daf-e6f1a24bb485","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"77518b5f-10b1-41ec-967f-de49fd29f002","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-075509" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-075509
--- PASS: TestErrorJSONOutput (0.20s)
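Each line emitted with `--output=json` above is a CloudEvents-style envelope whose `data` payload carries the step, info, or error details. A short sketch of decoding one of those lines; the struct covers only the fields visible in this log, so other fields may exist:

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the fields shown in the JSON lines above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// The error event from the stdout above, abbreviated to its non-empty fields.
	line := `{"specversion":"1.0","id":"77518b5f-10b1-41ec-967f-de49fd29f002","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	fmt.Println(e.Type, e.Data["name"], e.Data["message"])
}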

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (83.97s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-092839 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1009 19:16:15.370760  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-092839 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (38.685632534s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-095757 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-095757 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (42.502230964s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-092839
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-095757
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-095757" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-095757
helpers_test.go:175: Cleaning up "first-092839" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-092839
--- PASS: TestMinikubeProfile (83.97s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (23.31s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-751764 --memory=3072 --mount-string /tmp/TestMountStartserial1016540232/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-751764 --memory=3072 --mount-string /tmp/TestMountStartserial1016540232/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (22.30651693s)
--- PASS: TestMountStart/serial/StartWithMountFirst (23.31s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-751764 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-751764 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
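The verification step is two SSH commands: an `ls` to prove the host directory is visible inside the guest, and `findmnt --json /minikube-host` to confirm a mount entry exists. A sketch of checking that JSON on the caller's side, assuming findmnt's usual `filesystems` array layout; the field names come from findmnt's documented output, not from this log, and the sample values are placeholders:

package main

import (
	"encoding/json"
	"fmt"
)

type mounts struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		FSType  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

func main() {
	// Stand-in for the output of: minikube ssh -- findmnt --json /minikube-host
	raw := []byte(`{"filesystems":[{"target":"/minikube-host","source":"192.168.39.1","fstype":"9p","options":"rw"}]}`)
	var m mounts
	if err := json.Unmarshal(raw, &m); err != nil || len(m.Filesystems) == 0 {
		fmt.Println("mount not present:", err)
		return
	}
	fmt.Println("mounted at", m.Filesystems[0].Target, "via", m.Filesystems[0].FSType)
}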

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (24.86s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-770005 --memory=3072 --mount-string /tmp/TestMountStartserial1016540232/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-770005 --memory=3072 --mount-string /tmp/TestMountStartserial1016540232/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (23.855178489s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.86s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-770005 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-770005 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-751764 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-770005 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-770005 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-770005
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-770005: (1.305126721s)
--- PASS: TestMountStart/serial/Stop (1.31s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (20.71s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-770005
E1009 19:17:59.346658  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-770005: (19.711053543s)
--- PASS: TestMountStart/serial/RestartStopped (20.71s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-770005 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-770005 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (103.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-396378 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-396378 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m43.313323931s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (103.74s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (6.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396378 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396378 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-396378 -- rollout status deployment/busybox: (5.164133938s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396378 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396378 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396378 -- exec busybox-7b57f96db7-6hhgt -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396378 -- exec busybox-7b57f96db7-g47b6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396378 -- exec busybox-7b57f96db7-6hhgt -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396378 -- exec busybox-7b57f96db7-g47b6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396378 -- exec busybox-7b57f96db7-6hhgt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396378 -- exec busybox-7b57f96db7-g47b6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.70s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396378 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396378 -- exec busybox-7b57f96db7-6hhgt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396378 -- exec busybox-7b57f96db7-6hhgt -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396378 -- exec busybox-7b57f96db7-g47b6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-396378 -- exec busybox-7b57f96db7-g47b6 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)
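The host-ping check first resolves `host.minikube.internal` inside each pod and extracts the address with `awk 'NR==5' | cut -d' ' -f3` — field 3 of the fifth line of busybox's nslookup output — then pings that address from the pod. A rough Go equivalent of the extraction, relying on the same fixed line/field position the shell pipeline assumes; the sample output is shaped like busybox nslookup but is not captured in this log:

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics `nslookup ... | awk 'NR==5' | cut -d' ' -f3`: take the fifth
// line of the output and return its third space-separated field.
func hostIP(nslookupOutput string) string {
	lines := strings.Split(nslookupOutput, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	out := "Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.39.1 host.minikube.internal\n"
	fmt.Println(hostIP(out)) // prints 192.168.39.1 for the sample above
}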

                                                
                                    
x
+
TestMultiNode/serial/AddNode (45.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-396378 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-396378 -v=5 --alsologtostderr: (45.041593961s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.62s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-396378 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 cp testdata/cp-test.txt multinode-396378:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 ssh -n multinode-396378 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 cp multinode-396378:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile417125874/001/cp-test_multinode-396378.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 ssh -n multinode-396378 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 cp multinode-396378:/home/docker/cp-test.txt multinode-396378-m02:/home/docker/cp-test_multinode-396378_multinode-396378-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 ssh -n multinode-396378 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 ssh -n multinode-396378-m02 "sudo cat /home/docker/cp-test_multinode-396378_multinode-396378-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 cp multinode-396378:/home/docker/cp-test.txt multinode-396378-m03:/home/docker/cp-test_multinode-396378_multinode-396378-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 ssh -n multinode-396378 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 ssh -n multinode-396378-m03 "sudo cat /home/docker/cp-test_multinode-396378_multinode-396378-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 cp testdata/cp-test.txt multinode-396378-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 ssh -n multinode-396378-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 cp multinode-396378-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile417125874/001/cp-test_multinode-396378-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 ssh -n multinode-396378-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 cp multinode-396378-m02:/home/docker/cp-test.txt multinode-396378:/home/docker/cp-test_multinode-396378-m02_multinode-396378.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 ssh -n multinode-396378-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 ssh -n multinode-396378 "sudo cat /home/docker/cp-test_multinode-396378-m02_multinode-396378.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 cp multinode-396378-m02:/home/docker/cp-test.txt multinode-396378-m03:/home/docker/cp-test_multinode-396378-m02_multinode-396378-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 ssh -n multinode-396378-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 ssh -n multinode-396378-m03 "sudo cat /home/docker/cp-test_multinode-396378-m02_multinode-396378-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 cp testdata/cp-test.txt multinode-396378-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 ssh -n multinode-396378-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 cp multinode-396378-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile417125874/001/cp-test_multinode-396378-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 ssh -n multinode-396378-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 cp multinode-396378-m03:/home/docker/cp-test.txt multinode-396378:/home/docker/cp-test_multinode-396378-m03_multinode-396378.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 ssh -n multinode-396378-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 ssh -n multinode-396378 "sudo cat /home/docker/cp-test_multinode-396378-m03_multinode-396378.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 cp multinode-396378-m03:/home/docker/cp-test.txt multinode-396378-m02:/home/docker/cp-test_multinode-396378-m03_multinode-396378-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 ssh -n multinode-396378-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 ssh -n multinode-396378-m02 "sudo cat /home/docker/cp-test_multinode-396378-m03_multinode-396378-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.25s)
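The CopyFile steps above are copy-then-verify round trips between the host and each node, and between node pairs. A minimal sketch of one leg of each, assuming the same running profile (the node-to-node destination filename below is illustrative, not taken from the test):

    # Host -> node: copy a local file into the control-plane node, then read it back over SSH.
    out/minikube-linux-amd64 -p multinode-396378 cp testdata/cp-test.txt multinode-396378:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-396378 ssh -n multinode-396378 "sudo cat /home/docker/cp-test.txt"
    # Node -> node: the same cp syntax works with a node name on both sides (destination name is illustrative).
    out/minikube-linux-amd64 -p multinode-396378 cp multinode-396378:/home/docker/cp-test.txt multinode-396378-m02:/home/docker/cp-test_copy.txt
    out/minikube-linux-amd64 -p multinode-396378 ssh -n multinode-396378-m02 "sudo cat /home/docker/cp-test_copy.txt"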

                                                
                                    
TestMultiNode/serial/StopNode (2.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 node stop m03
E1009 19:21:02.417790  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-396378 node stop m03: (1.786261112s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-396378 status: exit status 7 (449.571527ms)

                                                
                                                
-- stdout --
	multinode-396378
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-396378-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-396378-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-396378 status --alsologtostderr: exit status 7 (450.172508ms)

                                                
                                                
-- stdout --
	multinode-396378
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-396378-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-396378-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:21:03.231967  166550 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:21:03.232205  166550 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:21:03.232214  166550 out.go:374] Setting ErrFile to fd 2...
	I1009 19:21:03.232217  166550 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:21:03.232418  166550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-136449/.minikube/bin
	I1009 19:21:03.232599  166550 out.go:368] Setting JSON to false
	I1009 19:21:03.232621  166550 mustload.go:65] Loading cluster: multinode-396378
	I1009 19:21:03.232755  166550 notify.go:221] Checking for updates...
	I1009 19:21:03.233085  166550 config.go:182] Loaded profile config "multinode-396378": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:21:03.233103  166550 status.go:174] checking status of multinode-396378 ...
	I1009 19:21:03.233657  166550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:21:03.233701  166550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:21:03.252208  166550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46641
	I1009 19:21:03.252934  166550 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:21:03.253504  166550 main.go:141] libmachine: Using API Version  1
	I1009 19:21:03.253533  166550 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:21:03.253950  166550 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:21:03.254156  166550 main.go:141] libmachine: (multinode-396378) Calling .GetState
	I1009 19:21:03.256437  166550 status.go:371] multinode-396378 host status = "Running" (err=<nil>)
	I1009 19:21:03.256457  166550 host.go:66] Checking if "multinode-396378" exists ...
	I1009 19:21:03.256824  166550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:21:03.256905  166550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:21:03.270977  166550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45541
	I1009 19:21:03.271455  166550 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:21:03.271868  166550 main.go:141] libmachine: Using API Version  1
	I1009 19:21:03.271888  166550 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:21:03.272247  166550 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:21:03.272404  166550 main.go:141] libmachine: (multinode-396378) Calling .GetIP
	I1009 19:21:03.275416  166550 main.go:141] libmachine: (multinode-396378) DBG | domain multinode-396378 has defined MAC address 52:54:00:d5:63:d0 in network mk-multinode-396378
	I1009 19:21:03.275880  166550 main.go:141] libmachine: (multinode-396378) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:63:d0", ip: ""} in network mk-multinode-396378: {Iface:virbr1 ExpiryTime:2025-10-09 20:18:31 +0000 UTC Type:0 Mac:52:54:00:d5:63:d0 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-396378 Clientid:01:52:54:00:d5:63:d0}
	I1009 19:21:03.275906  166550 main.go:141] libmachine: (multinode-396378) DBG | domain multinode-396378 has defined IP address 192.168.39.38 and MAC address 52:54:00:d5:63:d0 in network mk-multinode-396378
	I1009 19:21:03.276035  166550 host.go:66] Checking if "multinode-396378" exists ...
	I1009 19:21:03.276325  166550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:21:03.276362  166550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:21:03.291182  166550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42033
	I1009 19:21:03.291711  166550 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:21:03.292178  166550 main.go:141] libmachine: Using API Version  1
	I1009 19:21:03.292204  166550 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:21:03.292593  166550 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:21:03.292790  166550 main.go:141] libmachine: (multinode-396378) Calling .DriverName
	I1009 19:21:03.293000  166550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:21:03.293033  166550 main.go:141] libmachine: (multinode-396378) Calling .GetSSHHostname
	I1009 19:21:03.296600  166550 main.go:141] libmachine: (multinode-396378) DBG | domain multinode-396378 has defined MAC address 52:54:00:d5:63:d0 in network mk-multinode-396378
	I1009 19:21:03.297079  166550 main.go:141] libmachine: (multinode-396378) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:63:d0", ip: ""} in network mk-multinode-396378: {Iface:virbr1 ExpiryTime:2025-10-09 20:18:31 +0000 UTC Type:0 Mac:52:54:00:d5:63:d0 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-396378 Clientid:01:52:54:00:d5:63:d0}
	I1009 19:21:03.297109  166550 main.go:141] libmachine: (multinode-396378) DBG | domain multinode-396378 has defined IP address 192.168.39.38 and MAC address 52:54:00:d5:63:d0 in network mk-multinode-396378
	I1009 19:21:03.297256  166550 main.go:141] libmachine: (multinode-396378) Calling .GetSSHPort
	I1009 19:21:03.297456  166550 main.go:141] libmachine: (multinode-396378) Calling .GetSSHKeyPath
	I1009 19:21:03.297646  166550 main.go:141] libmachine: (multinode-396378) Calling .GetSSHUsername
	I1009 19:21:03.297820  166550 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/multinode-396378/id_rsa Username:docker}
	I1009 19:21:03.384813  166550 ssh_runner.go:195] Run: systemctl --version
	I1009 19:21:03.392541  166550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:21:03.413685  166550 kubeconfig.go:125] found "multinode-396378" server: "https://192.168.39.38:8443"
	I1009 19:21:03.413727  166550 api_server.go:166] Checking apiserver status ...
	I1009 19:21:03.413767  166550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:21:03.439370  166550 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1369/cgroup
	W1009 19:21:03.452061  166550 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1369/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:21:03.452113  166550 ssh_runner.go:195] Run: ls
	I1009 19:21:03.458133  166550 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I1009 19:21:03.462361  166550 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I1009 19:21:03.462383  166550 status.go:463] multinode-396378 apiserver status = Running (err=<nil>)
	I1009 19:21:03.462392  166550 status.go:176] multinode-396378 status: &{Name:multinode-396378 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:21:03.462406  166550 status.go:174] checking status of multinode-396378-m02 ...
	I1009 19:21:03.462688  166550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:21:03.462724  166550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:21:03.476277  166550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43457
	I1009 19:21:03.476723  166550 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:21:03.477158  166550 main.go:141] libmachine: Using API Version  1
	I1009 19:21:03.477177  166550 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:21:03.477580  166550 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:21:03.477804  166550 main.go:141] libmachine: (multinode-396378-m02) Calling .GetState
	I1009 19:21:03.479508  166550 status.go:371] multinode-396378-m02 host status = "Running" (err=<nil>)
	I1009 19:21:03.479527  166550 host.go:66] Checking if "multinode-396378-m02" exists ...
	I1009 19:21:03.479838  166550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:21:03.479904  166550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:21:03.493437  166550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40371
	I1009 19:21:03.493902  166550 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:21:03.494270  166550 main.go:141] libmachine: Using API Version  1
	I1009 19:21:03.494285  166550 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:21:03.494628  166550 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:21:03.494808  166550 main.go:141] libmachine: (multinode-396378-m02) Calling .GetIP
	I1009 19:21:03.497873  166550 main.go:141] libmachine: (multinode-396378-m02) DBG | domain multinode-396378-m02 has defined MAC address 52:54:00:26:1e:26 in network mk-multinode-396378
	I1009 19:21:03.498368  166550 main.go:141] libmachine: (multinode-396378-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1e:26", ip: ""} in network mk-multinode-396378: {Iface:virbr1 ExpiryTime:2025-10-09 20:19:27 +0000 UTC Type:0 Mac:52:54:00:26:1e:26 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-396378-m02 Clientid:01:52:54:00:26:1e:26}
	I1009 19:21:03.498409  166550 main.go:141] libmachine: (multinode-396378-m02) DBG | domain multinode-396378-m02 has defined IP address 192.168.39.150 and MAC address 52:54:00:26:1e:26 in network mk-multinode-396378
	I1009 19:21:03.498526  166550 host.go:66] Checking if "multinode-396378-m02" exists ...
	I1009 19:21:03.498916  166550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:21:03.498981  166550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:21:03.513182  166550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35079
	I1009 19:21:03.513738  166550 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:21:03.514252  166550 main.go:141] libmachine: Using API Version  1
	I1009 19:21:03.514272  166550 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:21:03.514607  166550 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:21:03.514824  166550 main.go:141] libmachine: (multinode-396378-m02) Calling .DriverName
	I1009 19:21:03.515063  166550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:21:03.515092  166550 main.go:141] libmachine: (multinode-396378-m02) Calling .GetSSHHostname
	I1009 19:21:03.518132  166550 main.go:141] libmachine: (multinode-396378-m02) DBG | domain multinode-396378-m02 has defined MAC address 52:54:00:26:1e:26 in network mk-multinode-396378
	I1009 19:21:03.518598  166550 main.go:141] libmachine: (multinode-396378-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:1e:26", ip: ""} in network mk-multinode-396378: {Iface:virbr1 ExpiryTime:2025-10-09 20:19:27 +0000 UTC Type:0 Mac:52:54:00:26:1e:26 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-396378-m02 Clientid:01:52:54:00:26:1e:26}
	I1009 19:21:03.518625  166550 main.go:141] libmachine: (multinode-396378-m02) DBG | domain multinode-396378-m02 has defined IP address 192.168.39.150 and MAC address 52:54:00:26:1e:26 in network mk-multinode-396378
	I1009 19:21:03.518767  166550 main.go:141] libmachine: (multinode-396378-m02) Calling .GetSSHPort
	I1009 19:21:03.518939  166550 main.go:141] libmachine: (multinode-396378-m02) Calling .GetSSHKeyPath
	I1009 19:21:03.519080  166550 main.go:141] libmachine: (multinode-396378-m02) Calling .GetSSHUsername
	I1009 19:21:03.519209  166550 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-136449/.minikube/machines/multinode-396378-m02/id_rsa Username:docker}
	I1009 19:21:03.598322  166550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:21:03.615105  166550 status.go:176] multinode-396378-m02 status: &{Name:multinode-396378-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:21:03.615175  166550 status.go:174] checking status of multinode-396378-m03 ...
	I1009 19:21:03.615483  166550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:21:03.615525  166550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:21:03.630295  166550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35183
	I1009 19:21:03.630774  166550 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:21:03.631197  166550 main.go:141] libmachine: Using API Version  1
	I1009 19:21:03.631224  166550 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:21:03.631588  166550 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:21:03.631768  166550 main.go:141] libmachine: (multinode-396378-m03) Calling .GetState
	I1009 19:21:03.633418  166550 status.go:371] multinode-396378-m03 host status = "Stopped" (err=<nil>)
	I1009 19:21:03.633432  166550 status.go:384] host is not running, skipping remaining checks
	I1009 19:21:03.633437  166550 status.go:176] multinode-396378-m03 status: &{Name:multinode-396378-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.69s)
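The non-zero exit above is expected rather than a failure: once any node is stopped, the status command reports the mixed state and exits non-zero (exit status 7 in this run). A minimal sketch, assuming the same profile:

    # Stop only the m03 worker, then query status; the mixed state surfaces as a non-zero exit code.
    out/minikube-linux-amd64 -p multinode-396378 node stop m03
    out/minikube-linux-amd64 -p multinode-396378 status
    echo "status exit code: $?"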

                                                
                                    
TestMultiNode/serial/StartAfterStop (40.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 node start m03 -v=5 --alsologtostderr
E1009 19:21:15.371028  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-396378 node start m03 -v=5 --alsologtostderr: (39.808668055s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.47s)
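Bringing a single stopped node back uses node start rather than a full profile start; a minimal sketch under the same assumptions:

    # Restart the previously stopped m03 worker and re-check cluster state.
    out/minikube-linux-amd64 -p multinode-396378 node start m03 -v=5 --alsologtostderr
    out/minikube-linux-amd64 -p multinode-396378 status -v=5 --alsologtostderr
    kubectl get nodes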

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (311.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-396378
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-396378
E1009 19:22:59.347410  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-396378: (2m52.584989984s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-396378 --wait=true -v=5 --alsologtostderr
E1009 19:26:15.366778  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-396378 --wait=true -v=5 --alsologtostderr: (2m19.205255723s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-396378
--- PASS: TestMultiNode/serial/RestartKeepsNodes (311.89s)
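The restart check above is a full stop followed by a start with --wait=true, after which the node list should be unchanged; a condensed sketch under the same assumptions:

    # Record the node list, stop the whole cluster, restart it, and compare the lists.
    out/minikube-linux-amd64 node list -p multinode-396378
    out/minikube-linux-amd64 stop -p multinode-396378
    out/minikube-linux-amd64 start -p multinode-396378 --wait=true -v=5 --alsologtostderr
    out/minikube-linux-amd64 node list -p multinode-396378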

                                                
                                    
TestMultiNode/serial/DeleteNode (2.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-396378 node delete m03: (2.161842971s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.70s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (168.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 stop
E1009 19:27:59.347293  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:29:18.443152  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-396378 stop: (2m48.202774506s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-396378 status: exit status 7 (93.060915ms)

                                                
                                                
-- stdout --
	multinode-396378
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-396378-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-396378 status --alsologtostderr: exit status 7 (80.083362ms)

                                                
                                                
-- stdout --
	multinode-396378
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-396378-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:29:47.030705  169396 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:29:47.030816  169396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:29:47.030822  169396 out.go:374] Setting ErrFile to fd 2...
	I1009 19:29:47.030826  169396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:29:47.031008  169396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-136449/.minikube/bin
	I1009 19:29:47.031163  169396 out.go:368] Setting JSON to false
	I1009 19:29:47.031185  169396 mustload.go:65] Loading cluster: multinode-396378
	I1009 19:29:47.031223  169396 notify.go:221] Checking for updates...
	I1009 19:29:47.031581  169396 config.go:182] Loaded profile config "multinode-396378": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:29:47.031595  169396 status.go:174] checking status of multinode-396378 ...
	I1009 19:29:47.031969  169396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:29:47.032003  169396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:29:47.045844  169396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35471
	I1009 19:29:47.046303  169396 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:29:47.046902  169396 main.go:141] libmachine: Using API Version  1
	I1009 19:29:47.046928  169396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:29:47.047324  169396 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:29:47.047521  169396 main.go:141] libmachine: (multinode-396378) Calling .GetState
	I1009 19:29:47.049301  169396 status.go:371] multinode-396378 host status = "Stopped" (err=<nil>)
	I1009 19:29:47.049315  169396 status.go:384] host is not running, skipping remaining checks
	I1009 19:29:47.049319  169396 status.go:176] multinode-396378 status: &{Name:multinode-396378 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:29:47.049338  169396 status.go:174] checking status of multinode-396378-m02 ...
	I1009 19:29:47.049667  169396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:29:47.049705  169396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:29:47.062611  169396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33561
	I1009 19:29:47.063109  169396 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:29:47.063639  169396 main.go:141] libmachine: Using API Version  1
	I1009 19:29:47.063665  169396 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:29:47.064075  169396 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:29:47.064287  169396 main.go:141] libmachine: (multinode-396378-m02) Calling .GetState
	I1009 19:29:47.065984  169396 status.go:371] multinode-396378-m02 host status = "Stopped" (err=<nil>)
	I1009 19:29:47.066001  169396 status.go:384] host is not running, skipping remaining checks
	I1009 19:29:47.066008  169396 status.go:176] multinode-396378-m02 status: &{Name:multinode-396378-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (168.38s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (127.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-396378 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1009 19:31:15.371270  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-396378 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (2m6.865903095s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-396378 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (127.43s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (39.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-396378
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-396378-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-396378-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (63.179453ms)

                                                
                                                
-- stdout --
	* [multinode-396378-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-136449/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-136449/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-396378-m02' is duplicated with machine name 'multinode-396378-m02' in profile 'multinode-396378'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-396378-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-396378-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (38.766948037s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-396378
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-396378: exit status 80 (214.033604ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-396378 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-396378-m03 already exists in multinode-396378-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-396378-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.91s)
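The conflict checks above rest on two rules visible in the output: a new profile may not reuse a machine name that already belongs to another profile (exit status 14, MK_USAGE), and node add refuses a node whose name collides with an existing profile (exit status 80, GUEST_NODE_ADD). A sketch of the first rule, assuming the multinode profile still exists:

    # Creating a profile whose name matches an existing machine name is rejected with MK_USAGE.
    out/minikube-linux-amd64 start -p multinode-396378-m02 --driver=kvm2 --container-runtime=crio --auto-update-drivers=false
    echo "exit code: $?"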

                                                
                                    
TestScheduledStopUnix (112.41s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-148585 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-148585 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.694105999s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-148585 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-148585 -n scheduled-stop-148585
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-148585 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1009 19:36:01.976492  140358 retry.go:31] will retry after 105.23µs: open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/scheduled-stop-148585/pid: no such file or directory
I1009 19:36:01.977695  140358 retry.go:31] will retry after 165.139µs: open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/scheduled-stop-148585/pid: no such file or directory
I1009 19:36:01.978882  140358 retry.go:31] will retry after 227.065µs: open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/scheduled-stop-148585/pid: no such file or directory
I1009 19:36:01.980098  140358 retry.go:31] will retry after 501.259µs: open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/scheduled-stop-148585/pid: no such file or directory
I1009 19:36:01.981250  140358 retry.go:31] will retry after 556.529µs: open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/scheduled-stop-148585/pid: no such file or directory
I1009 19:36:01.982391  140358 retry.go:31] will retry after 561.752µs: open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/scheduled-stop-148585/pid: no such file or directory
I1009 19:36:01.983516  140358 retry.go:31] will retry after 817.122µs: open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/scheduled-stop-148585/pid: no such file or directory
I1009 19:36:01.984604  140358 retry.go:31] will retry after 1.042942ms: open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/scheduled-stop-148585/pid: no such file or directory
I1009 19:36:01.985722  140358 retry.go:31] will retry after 2.260825ms: open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/scheduled-stop-148585/pid: no such file or directory
I1009 19:36:01.988939  140358 retry.go:31] will retry after 2.059697ms: open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/scheduled-stop-148585/pid: no such file or directory
I1009 19:36:01.991105  140358 retry.go:31] will retry after 5.596931ms: open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/scheduled-stop-148585/pid: no such file or directory
I1009 19:36:01.997397  140358 retry.go:31] will retry after 5.499976ms: open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/scheduled-stop-148585/pid: no such file or directory
I1009 19:36:02.003684  140358 retry.go:31] will retry after 14.150246ms: open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/scheduled-stop-148585/pid: no such file or directory
I1009 19:36:02.018005  140358 retry.go:31] will retry after 14.023674ms: open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/scheduled-stop-148585/pid: no such file or directory
I1009 19:36:02.032305  140358 retry.go:31] will retry after 41.424612ms: open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/scheduled-stop-148585/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-148585 --cancel-scheduled
E1009 19:36:15.370834  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-148585 -n scheduled-stop-148585
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-148585
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-148585 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-148585
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-148585: exit status 7 (63.125145ms)

                                                
                                                
-- stdout --
	scheduled-stop-148585
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-148585 -n scheduled-stop-148585
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-148585 -n scheduled-stop-148585: exit status 7 (64.481375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-148585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-148585
--- PASS: TestScheduledStopUnix (112.41s)
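The scheduled-stop flow above schedules a delayed stop, cancels it, then lets a short schedule run to completion; a minimal sketch, assuming a running profile named scheduled-stop-148585 (the sleep duration is illustrative, not taken from the test):

    # Schedule a stop five minutes out, then cancel it; the host should remain Running.
    out/minikube-linux-amd64 stop -p scheduled-stop-148585 --schedule 5m
    out/minikube-linux-amd64 stop -p scheduled-stop-148585 --cancel-scheduled
    out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-148585 -n scheduled-stop-148585
    # Re-schedule with a short delay and wait for it to fire; status then reports Stopped (exit status 7, "may be ok").
    out/minikube-linux-amd64 stop -p scheduled-stop-148585 --schedule 15s
    sleep 30
    out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-148585 -n scheduled-stop-148585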

                                                
                                    
TestRunningBinaryUpgrade (151.96s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.884116591 start -p running-upgrade-235499 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1009 19:37:42.419189  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.884116591 start -p running-upgrade-235499 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m41.539288401s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-235499 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-235499 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (46.375983953s)
helpers_test.go:175: Cleaning up "running-upgrade-235499" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-235499
--- PASS: TestRunningBinaryUpgrade (151.96s)
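The running-binary upgrade is a start with an older released minikube followed by a start of the same profile with the binary under test; a sketch, assuming the v1.32.0 binary has already been fetched to the temporary path used in this run:

    # Create the cluster with the old release, then upgrade it in place with the new binary and clean up.
    /tmp/minikube-v1.32.0.884116591 start -p running-upgrade-235499 --memory=3072 --vm-driver=kvm2 --container-runtime=crio --auto-update-drivers=false
    out/minikube-linux-amd64 start -p running-upgrade-235499 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio --auto-update-drivers=false
    out/minikube-linux-amd64 delete -p running-upgrade-235499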

                                                
                                    
TestKubernetesUpgrade (177.74s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-300653 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-300653 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.318360283s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-300653
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-300653: (2.391741982s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-300653 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-300653 status --format={{.Host}}: exit status 7 (79.949776ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-300653 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1009 19:37:59.343840  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-300653 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m12.095566728s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-300653 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-300653 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-300653 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 106 (96.042611ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-300653] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-136449/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-136449/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-300653
	    minikube start -p kubernetes-upgrade-300653 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3006532 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-300653 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-300653 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-300653 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m0.750180926s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-300653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-300653
--- PASS: TestKubernetesUpgrade (177.74s)
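The upgrade path above is: start at v1.28.0, stop, start again at v1.34.1, and confirm that a direct downgrade of the same profile is rejected; a condensed sketch under the same assumptions:

    # Start old, stop, upgrade in place.
    out/minikube-linux-amd64 start -p kubernetes-upgrade-300653 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio --auto-update-drivers=false
    out/minikube-linux-amd64 stop -p kubernetes-upgrade-300653
    out/minikube-linux-amd64 start -p kubernetes-upgrade-300653 --memory=3072 --kubernetes-version=v1.34.1 --driver=kvm2 --container-runtime=crio --auto-update-drivers=false
    # A downgrade of the same profile is refused with K8S_DOWNGRADE_UNSUPPORTED (exit status 106).
    out/minikube-linux-amd64 start -p kubernetes-upgrade-300653 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio --auto-update-drivers=false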

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-195549 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-195549 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (75.858993ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-195549] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-136449/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-136449/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
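The exit status 14 above documents a flag conflict: --no-kubernetes cannot be combined with --kubernetes-version. A sketch of the recovery path suggested by the error message (the follow-up start without a version flag is an assumption about intended usage, not taken from this log):

    # This combination is rejected with MK_USAGE:
    out/minikube-linux-amd64 start -p NoKubernetes-195549 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio --auto-update-drivers=false
    # Clear any globally configured version, then start without Kubernetes:
    out/minikube-linux-amd64 config unset kubernetes-version
    out/minikube-linux-amd64 start -p NoKubernetes-195549 --no-kubernetes --driver=kvm2 --container-runtime=crio --auto-update-drivers=false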

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (81.81s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-195549 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-195549 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m21.470407258s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-195549 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (81.81s)

                                                
                                    
TestNetworkPlugins/group/false (3.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-980148 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-980148 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (116.014942ms)

                                                
                                                
-- stdout --
	* [false-980148] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-136449/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-136449/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:38:33.852664  174894 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:38:33.852992  174894 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:33.853006  174894 out.go:374] Setting ErrFile to fd 2...
	I1009 19:38:33.853013  174894 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:38:33.853323  174894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-136449/.minikube/bin
	I1009 19:38:33.854033  174894 out.go:368] Setting JSON to false
	I1009 19:38:33.855297  174894 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8454,"bootTime":1760030260,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:38:33.855419  174894 start.go:143] virtualization: kvm guest
	I1009 19:38:33.857373  174894 out.go:179] * [false-980148] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:38:33.858700  174894 out.go:179]   - MINIKUBE_LOCATION=21683
	I1009 19:38:33.858721  174894 notify.go:221] Checking for updates...
	I1009 19:38:33.860878  174894 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:38:33.862176  174894 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-136449/kubeconfig
	I1009 19:38:33.863190  174894 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-136449/.minikube
	I1009 19:38:33.864242  174894 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:38:33.865321  174894 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:38:33.866985  174894 config.go:182] Loaded profile config "NoKubernetes-195549": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:33.867118  174894 config.go:182] Loaded profile config "kubernetes-upgrade-300653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:38:33.867219  174894 config.go:182] Loaded profile config "running-upgrade-235499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1009 19:38:33.867324  174894 driver.go:422] Setting default libvirt URI to qemu:///system
	I1009 19:38:33.902384  174894 out.go:179] * Using the kvm2 driver based on user configuration
	I1009 19:38:33.903400  174894 start.go:309] selected driver: kvm2
	I1009 19:38:33.903414  174894 start.go:930] validating driver "kvm2" against <nil>
	I1009 19:38:33.903426  174894 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:38:33.905359  174894 out.go:203] 
	W1009 19:38:33.906488  174894 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1009 19:38:33.907500  174894 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-980148 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-980148

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-980148

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-980148

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-980148

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-980148

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-980148

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-980148

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-980148

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-980148

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-980148

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-980148

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-980148" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-980148" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21683-136449/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 09 Oct 2025 19:38:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.159:8443
  name: NoKubernetes-195549
contexts:
- context:
    cluster: NoKubernetes-195549
    extensions:
    - extension:
        last-update: Thu, 09 Oct 2025 19:38:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-195549
  name: NoKubernetes-195549
current-context: NoKubernetes-195549
kind: Config
users:
- name: NoKubernetes-195549
  user:
    client-certificate: /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/NoKubernetes-195549/client.crt
    client-key: /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/NoKubernetes-195549/client.key
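Note on the kubeconfig dump just above: its current-context is NoKubernetes-195549, not false-980148, which is why every kubectl probe in this debugLogs block reports a missing context (the false-980148 start was rejected before any context could be written). A minimal way to inspect and switch contexts by hand, assuming kubectl is on PATH:

  # list contexts known to the active kubeconfig
  kubectl config get-contexts
  # make the context reported as current in the dump above the active one
  kubectl config use-context NoKubernetes-195549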

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-980148

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-980148"

                                                
                                                
----------------------- debugLogs end: false-980148 [took: 3.64486834s] --------------------------------
helpers_test.go:175: Cleaning up "false-980148" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-980148
--- PASS: TestNetworkPlugins/group/false (3.96s)
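For context on the MK_USAGE exit captured in the stderr block above: minikube refuses to start a crio cluster when CNI is disabled, which is what this "false" network-plugin run provokes, so the expected rejection is why the group is still recorded as PASS. A minimal sketch of the difference, using a hypothetical scratch profile named cni-demo:

  # rejected: the crio runtime requires a CNI, so disabling it exits with MK_USAGE
  out/minikube-linux-amd64 start -p cni-demo --driver=kvm2 --container-runtime=crio --cni=false
  # accepted: any concrete CNI works, e.g. the bridge plugin used elsewhere in this report
  out/minikube-linux-amd64 start -p cni-demo --driver=kvm2 --container-runtime=crio --cni=bridge
  out/minikube-linux-amd64 delete -p cni-demo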

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (28.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-195549 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-195549 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (27.80389367s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-195549 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-195549 status -o json: exit status 2 (291.049361ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-195549","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-195549
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (28.97s)
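The JSON status above shows the shape this scenario expects after dropping Kubernetes: the host keeps running while the kubelet and API server are stopped. A small sketch of checking the same fields from a shell, assuming jq is installed:

  # a non-zero exit from `status` here reflects stopped components, not a broken command
  out/minikube-linux-amd64 -p NoKubernetes-195549 status -o json || true
  # pick individual fields out of the JSON
  out/minikube-linux-amd64 -p NoKubernetes-195549 status -o json | jq -r '.Host, .Kubelet, .APIServer'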

                                                
                                    
x
+
TestNoKubernetes/serial/Start (47.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-195549 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-195549 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (47.409417523s)
--- PASS: TestNoKubernetes/serial/Start (47.41s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-195549 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-195549 "sudo systemctl is-active --quiet service kubelet": exit status 1 (188.147786ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
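The verification above leans on systemctl's exit status rather than its output: is-active --quiet prints nothing and exits 0 only when the unit is active, so the non-zero exit (status 4 on the guest, surfaced as exit status 1 from minikube ssh) is the "kubelet is not running" signal. A hand-run equivalent, assuming the profile is up:

  if out/minikube-linux-amd64 ssh -p NoKubernetes-195549 "sudo systemctl is-active --quiet kubelet"; then
    echo "kubelet is active"
  else
    echo "kubelet is not active (expected for a --no-kubernetes profile)"
  fi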

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.85s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-195549
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-195549: (1.289593221s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (62.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-195549 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-195549 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m2.26719972s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (62.27s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (3.02s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.02s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (125.97s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2018249906 start -p stopped-upgrade-716476 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2018249906 start -p stopped-upgrade-716476 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m8.685865524s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2018249906 -p stopped-upgrade-716476 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2018249906 -p stopped-upgrade-716476 stop: (1.627599602s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-716476 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-716476 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (55.655787778s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (125.97s)
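The upgrade scenario above is three steps: create the cluster with an old minikube release, stop it, then start the same profile with the binary under test. A condensed sketch, where OLD_MINIKUBE is a hypothetical path standing in for the temporary v1.32.0 binary the test downloads:

  OLD_MINIKUBE=/path/to/minikube-v1.32.0   # hypothetical; the test uses a temp copy
  "$OLD_MINIKUBE" start -p stopped-upgrade-716476 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
  "$OLD_MINIKUBE" -p stopped-upgrade-716476 stop
  # the current binary must adopt and restart the stopped profile
  out/minikube-linux-amd64 start -p stopped-upgrade-716476 --memory=3072 --driver=kvm2 --container-runtime=crio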

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-195549 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-195549 "sudo systemctl is-active --quiet service kubelet": exit status 1 (210.59165ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
x
+
TestPause/serial/Start (115.95s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-612343 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-612343 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m55.951700328s)
--- PASS: TestPause/serial/Start (115.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (104.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-980148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-980148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m44.949521558s)
--- PASS: TestNetworkPlugins/group/auto/Start (104.95s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.09s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-716476
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-716476: (1.091572219s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (59.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-980148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-980148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (59.026835236s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (59.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-980148 "pgrep -a kubelet"
I1009 19:43:20.418502  140358 config.go:182] Loaded profile config "auto-980148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-980148 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b8kgp" [8146429c-a53e-4e02-9288-f9446c3c4be2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-b8kgp" [8146429c-a53e-4e02-9288-f9446c3c4be2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003396675s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-980148 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-980148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-980148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
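The three short checks above all execute inside the netcat deployment: DNS resolution of the kubernetes service, a TCP connect to the pod's own localhost listener, and a hairpin connect back through its own service name. Re-running them by hand against this context looks like:

  # DNS: resolve the kubernetes service from inside the pod
  kubectl --context auto-980148 exec deployment/netcat -- nslookup kubernetes.default
  # Localhost: the pod can reach its own listener on 8080
  kubectl --context auto-980148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # HairPin: the pod can reach itself through the netcat service
  kubectl --context auto-980148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"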

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (76.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-980148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-980148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m16.92099705s)
--- PASS: TestNetworkPlugins/group/calico/Start (76.92s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-rf9zc" [b5fe49fa-1dcb-42dc-96c5-0b97cd74b8fd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006590981s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
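The ControllerPod step waits for the CNI daemon pod (label app=kindnet in kube-system) to report Ready before the connectivity tests run. A rough hand-run equivalent with kubectl wait, assuming the same label selector:

  kubectl --context kindnet-980148 -n kube-system wait pod \
    --selector=app=kindnet --for=condition=Ready --timeout=600s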

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-980148 "pgrep -a kubelet"
I1009 19:43:57.368656  140358 config.go:182] Loaded profile config "kindnet-980148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-980148 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-httg6" [99d4fc8a-7f3a-441d-84c5-a78ceb2e413e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-httg6" [99d4fc8a-7f3a-441d-84c5-a78ceb2e413e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004464818s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-980148 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-980148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-980148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (79.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-980148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-980148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m19.824753864s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (79.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (73.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-980148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-980148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m13.847805846s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (73.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (109.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-980148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-980148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m49.719506321s)
--- PASS: TestNetworkPlugins/group/flannel/Start (109.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-t5zbm" [cad6f89a-289c-4bb3-a1d5-42243d3ccfad] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-t5zbm" [cad6f89a-289c-4bb3-a1d5-42243d3ccfad] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006428016s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-980148 "pgrep -a kubelet"
I1009 19:45:10.366689  140358 config.go:182] Loaded profile config "calico-980148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-980148 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-g47br" [84119955-d433-4efb-87de-2595141f6d21] Pending
helpers_test.go:352: "netcat-cd4db9dbf-g47br" [84119955-d433-4efb-87de-2595141f6d21] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-g47br" [84119955-d433-4efb-87de-2595141f6d21] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004549333s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-980148 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-980148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-980148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-980148 "pgrep -a kubelet"
I1009 19:45:36.256109  140358 config.go:182] Loaded profile config "custom-flannel-980148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-980148 replace --force -f testdata/netcat-deployment.yaml
I1009 19:45:36.504475  140358 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gx85z" [c25e4daa-8c56-4e61-9d1e-2f687c4941f4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gx85z" [c25e4daa-8c56-4e61-9d1e-2f687c4941f4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004451685s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-980148 "pgrep -a kubelet"
I1009 19:45:41.200243  140358 config.go:182] Loaded profile config "enable-default-cni-980148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-980148 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mks8q" [9f4925ce-9c15-476f-af4f-9c0c0331d364] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mks8q" [9f4925ce-9c15-476f-af4f-9c0c0331d364] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005407442s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (88.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-980148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-980148 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m28.119997778s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-980148 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-980148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-980148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-980148 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-980148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-980148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (99.81s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-838837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-838837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m39.811019972s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (99.81s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (124.77s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-867423 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1009 19:46:15.366143  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-867423 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (2m4.767713189s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (124.77s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-hj5hn" [54ad432d-3a36-4b5c-b307-8f2ae1f8e5d0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006526498s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-980148 "pgrep -a kubelet"
I1009 19:46:29.354274  140358 config.go:182] Loaded profile config "flannel-980148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-980148 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fjpx9" [d2b48619-fe02-4748-b2d1-60a3dec4174d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fjpx9" [d2b48619-fe02-4748-b2d1-60a3dec4174d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003891958s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-980148 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-980148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-980148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (95.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-858089 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-858089 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m35.443316993s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (95.44s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-980148 "pgrep -a kubelet"
I1009 19:47:10.189077  140358 config.go:182] Loaded profile config "bridge-980148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

TestNetworkPlugins/group/bridge/NetCatPod (12.24s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-980148 replace --force -f testdata/netcat-deployment.yaml
I1009 19:47:11.381389  140358 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qwpvd" [8ddb5f5d-321f-4360-866e-dba333d96054] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qwpvd" [8ddb5f5d-321f-4360-866e-dba333d96054] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003848433s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.24s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-980148 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-980148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.23s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-980148 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
E1009 19:51:15.365958  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/functional-413212/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:51:17.463389  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/custom-flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-407620 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-407620 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m25.060295967s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.06s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-838837 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [616fe8bc-f2de-4efe-83f7-987915d3bb94] Pending
helpers_test.go:352: "busybox" [616fe8bc-f2de-4efe-83f7-987915d3bb94] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [616fe8bc-f2de-4efe-83f7-987915d3bb94] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004052753s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-838837 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.34s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-838837 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-838837 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.105052596s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-838837 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)

TestStartStop/group/old-k8s-version/serial/Stop (89.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-838837 --alsologtostderr -v=3
E1009 19:47:59.344541  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/addons-916037/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-838837 --alsologtostderr -v=3: (1m29.81551864s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (89.82s)

TestStartStop/group/no-preload/serial/DeployApp (10.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-867423 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [23181104-30fe-4a34-a3a6-0eae85002398] Pending
helpers_test.go:352: "busybox" [23181104-30fe-4a34-a3a6-0eae85002398] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [23181104-30fe-4a34-a3a6-0eae85002398] Running
E1009 19:48:20.634478  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/auto-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:48:20.640900  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/auto-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:48:20.652372  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/auto-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:48:20.673784  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/auto-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:48:20.715331  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/auto-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:48:20.796789  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/auto-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:48:20.958455  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/auto-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:48:21.280118  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/auto-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:48:21.921492  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/auto-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:48:23.203479  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/auto-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004869273s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-867423 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.35s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-867423 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1009 19:48:25.765421  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/auto-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-867423 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.042552514s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-867423 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/no-preload/serial/Stop (74.01s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-867423 --alsologtostderr -v=3
E1009 19:48:30.887634  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/auto-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-867423 --alsologtostderr -v=3: (1m14.009660001s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (74.01s)

TestStartStop/group/embed-certs/serial/DeployApp (11.29s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-858089 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e770169d-9d03-4e09-8c5d-494f5a4a21f7] Pending
helpers_test.go:352: "busybox" [e770169d-9d03-4e09-8c5d-494f5a4a21f7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e770169d-9d03-4e09-8c5d-494f5a4a21f7] Running
E1009 19:48:41.129761  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/auto-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004919783s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-858089 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.29s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-858089 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-858089 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/embed-certs/serial/Stop (70.7s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-858089 --alsologtostderr -v=3
E1009 19:48:51.120653  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/kindnet-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:48:51.127051  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/kindnet-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:48:51.138476  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/kindnet-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:48:51.159950  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/kindnet-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:48:51.201687  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/kindnet-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:48:51.283159  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/kindnet-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:48:51.444800  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/kindnet-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:48:51.766784  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/kindnet-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:48:52.408893  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/kindnet-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:48:53.691180  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/kindnet-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:48:56.252471  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/kindnet-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:49:01.373791  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/kindnet-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:49:01.611753  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/auto-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-858089 --alsologtostderr -v=3: (1m10.696957304s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (70.70s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-407620 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b6519634-8a18-408f-b63f-634d2885f2f6] Pending
helpers_test.go:352: "busybox" [b6519634-8a18-408f-b63f-634d2885f2f6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1009 19:49:11.615487  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/kindnet-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [b6519634-8a18-408f-b63f-634d2885f2f6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 12.004526056s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-407620 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.29s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-407620 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-407620 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (87.55s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-407620 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-407620 --alsologtostderr -v=3: (1m27.547554832s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (87.55s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-838837 -n old-k8s-version-838837
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-838837 -n old-k8s-version-838837: exit status 7 (62.54507ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-838837 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/old-k8s-version/serial/SecondStart (46.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-838837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
E1009 19:49:32.096865  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/kindnet-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-838837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (45.77053054s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-838837 -n old-k8s-version-838837
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (46.15s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-867423 -n no-preload-867423
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-867423 -n no-preload-867423: exit status 7 (68.063125ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-867423 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (92s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-867423 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1009 19:49:42.574023  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/auto-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-867423 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m31.703975485s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-867423 -n no-preload-867423
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (92.00s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-858089 -n embed-certs-858089
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-858089 -n embed-certs-858089: exit status 7 (89.458316ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-858089 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/SecondStart (52.8s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-858089 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1009 19:50:04.103198  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/calico-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:04.109683  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/calico-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:04.121182  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/calico-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:04.142639  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/calico-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:04.184925  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/calico-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:04.266421  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/calico-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:04.428519  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/calico-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:04.750766  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/calico-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:05.392606  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/calico-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:06.674716  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/calico-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:09.236882  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/calico-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:13.059147  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/kindnet-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:14.358722  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/calico-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-858089 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (52.406356636s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-858089 -n embed-certs-858089
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (52.80s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (21.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-j2sn5" [7b4c227a-7c7c-4e69-9e44-5d3954fb54cc] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1009 19:50:24.600725  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/calico-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-j2sn5" [7b4c227a-7c7c-4e69-9e44-5d3954fb54cc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 21.005064634s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (21.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-j2sn5" [7b4c227a-7c7c-4e69-9e44-5d3954fb54cc] Running
E1009 19:50:36.484260  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/custom-flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:36.490678  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/custom-flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:36.502385  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/custom-flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:36.523894  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/custom-flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:36.565635  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/custom-flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:36.647512  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/custom-flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:36.809494  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/custom-flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:37.131288  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/custom-flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:37.773715  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/custom-flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:39.055662  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/custom-flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:41.461137  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/enable-default-cni-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:41.467539  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/enable-default-cni-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:41.479067  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/enable-default-cni-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:41.500508  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/enable-default-cni-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:41.541952  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/enable-default-cni-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:41.617486  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/custom-flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:41.623832  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/enable-default-cni-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:50:41.785313  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/enable-default-cni-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003791429s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-838837 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-838837 image list --format=json
E1009 19:50:42.107158  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/enable-default-cni-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (3.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-838837 --alsologtostderr -v=1
E1009 19:50:42.748814  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/enable-default-cni-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-838837 -n old-k8s-version-838837
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-838837 -n old-k8s-version-838837: exit status 2 (290.05368ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-838837 -n old-k8s-version-838837
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-838837 -n old-k8s-version-838837: exit status 2 (290.010958ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-838837 --alsologtostderr -v=1
E1009 19:50:44.030922  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/enable-default-cni-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-838837 -n old-k8s-version-838837
E1009 19:50:45.082024  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/calico-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-838837 -n old-k8s-version-838837
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.36s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-407620 -n default-k8s-diff-port-407620
E1009 19:50:46.592929  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/enable-default-cni-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-407620 -n default-k8s-diff-port-407620: exit status 7 (91.637724ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-407620 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1009 19:50:46.739746  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/custom-flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-407620 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-407620 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (48.201705932s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-407620 -n default-k8s-diff-port-407620
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.48s)

TestStartStop/group/newest-cni/serial/FirstStart (67.17s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-873996 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-873996 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m7.168861979s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (67.17s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gljrd" [2ee8b438-d9a3-4c01-b59d-f496ed94d2fd] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1009 19:50:51.715118  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/enable-default-cni-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gljrd" [2ee8b438-d9a3-4c01-b59d-f496ed94d2fd] Running
E1009 19:50:56.982038  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/custom-flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:51:01.956832  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/enable-default-cni-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004670614s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gljrd" [2ee8b438-d9a3-4c01-b59d-f496ed94d2fd] Running
E1009 19:51:04.495445  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/auto-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005867798s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-858089 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-858089 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/embed-certs/serial/Pause (3.66s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-858089 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-858089 --alsologtostderr -v=1: (1.225240331s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-858089 -n embed-certs-858089
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-858089 -n embed-certs-858089: exit status 2 (313.534021ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-858089 -n embed-certs-858089
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-858089 -n embed-certs-858089: exit status 2 (318.263199ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-858089 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-858089 --alsologtostderr -v=1: (1.069947033s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-858089 -n embed-certs-858089
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-858089 -n embed-certs-858089
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.66s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k9tr5" [1a1d7708-8e39-47ce-89c7-b1bcf1f4dd0f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006756043s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
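The check above waits for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace to become healthy. A roughly equivalent manual check, assuming the kubectl context name shown later in this log (the test itself uses its own polling helpers rather than kubectl wait):

  kubectl --context no-preload-867423 -n kubernetes-dashboard wait pod \
    -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m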

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k9tr5" [1a1d7708-8e39-47ce-89c7-b1bcf1f4dd0f] Running
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k9tr5" [1a1d7708-8e39-47ce-89c7-b1bcf1f4dd0f] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k9tr5" [1a1d7708-8e39-47ce-89c7-b1bcf1f4dd0f] Running
E1009 19:51:22.439092  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/enable-default-cni-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:51:23.109415  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:51:23.115825  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:51:23.127278  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:51:23.149103  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:51:23.190519  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:51:23.272004  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:51:23.433575  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003623818s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-867423 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-867423 image list --format=json
E1009 19:51:23.755252  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-867423 --alsologtostderr -v=1
E1009 19:51:24.396615  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-867423 -n no-preload-867423
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-867423 -n no-preload-867423: exit status 2 (268.462771ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-867423 -n no-preload-867423
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-867423 -n no-preload-867423: exit status 2 (293.147778ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-867423 --alsologtostderr -v=1
E1009 19:51:25.678921  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:51:26.044398  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/calico-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-867423 -n no-preload-867423
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-867423 -n no-preload-867423
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zw5mp" [8da4bd95-ea33-4649-8f0e-ccabe75ef581] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zw5mp" [8da4bd95-ea33-4649-8f0e-ccabe75ef581] Running
E1009 19:51:43.604043  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.004639373s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zw5mp" [8da4bd95-ea33-4649-8f0e-ccabe75ef581] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004316638s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-407620 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-407620 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-407620 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-407620 -n default-k8s-diff-port-407620
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-407620 -n default-k8s-diff-port-407620: exit status 2 (246.155515ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-407620 -n default-k8s-diff-port-407620
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-407620 -n default-k8s-diff-port-407620: exit status 2 (268.126005ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-407620 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-407620 -n default-k8s-diff-port-407620
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-407620 -n default-k8s-diff-port-407620
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.90s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-873996 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-873996 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.073008498s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (2.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-873996 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-873996 --alsologtostderr -v=3: (2.114567235s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.11s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-873996 -n newest-cni-873996
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-873996 -n newest-cni-873996: exit status 7 (63.979107ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-873996 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (35.99s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-873996 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1009 19:51:58.425746  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/custom-flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:52:03.401365  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/enable-default-cni-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:52:04.085471  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/flannel-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:52:11.187771  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/bridge-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:52:11.194217  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/bridge-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:52:11.205680  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/bridge-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:52:11.227174  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/bridge-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:52:11.268668  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/bridge-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:52:11.350228  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/bridge-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:52:11.511961  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/bridge-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:52:11.833718  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/bridge-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:52:12.475793  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/bridge-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:52:13.757588  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/bridge-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:52:16.319082  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/bridge-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:52:21.441431  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/bridge-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:52:31.683750  140358 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/bridge-980148/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-873996 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (35.653837937s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-873996 -n newest-cni-873996
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.99s)
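Taken together, the Stop, EnableAddonAfterStop and SecondStart steps above exercise the restart path for the newest-cni profile: stop the cluster, enable an addon while it is down, then start it again with the same CNI flags, waiting only for the apiserver, system pods and default service account (no CNI is configured yet, so user pods cannot schedule). A condensed sketch of that sequence, copied from the commands in the log:

  out/minikube-linux-amd64 stop -p newest-cni-873996 --alsologtostderr -v=3
  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-873996 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
  out/minikube-linux-amd64 start -p newest-cni-873996 --memory=3072 --alsologtostderr \
    --wait=apiserver,system_pods,default_sa --network-plugin=cni \
    --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2 --container-runtime=crio \
    --auto-update-drivers=false --kubernetes-version=v1.34.1
  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-873996 -n newest-cni-873996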

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-873996 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.93s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-873996 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-873996 --alsologtostderr -v=1: (1.674348934s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-873996 -n newest-cni-873996
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-873996 -n newest-cni-873996: exit status 2 (298.438746ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-873996 -n newest-cni-873996
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-873996 -n newest-cni-873996: exit status 2 (323.647761ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-873996 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-873996 -n newest-cni-873996
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-873996 -n newest-cni-873996
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.93s)

                                                
                                    

Test skip (40/324)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.33
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
119 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
121 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
125 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
259 TestNetworkPlugins/group/kubenet 3.21
268 TestNetworkPlugins/group/cilium 5.77
282 TestStartStop/group/disable-driver-mounts 0.17
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.33s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-916037 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.33s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-980148 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-980148

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-980148

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-980148

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-980148

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-980148

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-980148

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-980148

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-980148

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-980148

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-980148

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-980148

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-980148" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-980148" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-980148

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-980148"

                                                
                                                
----------------------- debugLogs end: kubenet-980148 [took: 3.047307305s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-980148" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-980148
--- SKIP: TestNetworkPlugins/group/kubenet (3.21s)
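The kubenet section above is a best-effort log sweep: every probe prints whatever it got, including the "Profile not found" / "context does not exist" errors, instead of aborting. Below is a minimal sketch of that collection pattern in Go, assuming hypothetical names (collectProbe, the exact commands); the real debugLogs helper in the test harness is not reproduced here and may differ.

package main

import (
	"fmt"
	"os/exec"
)

// collectProbe runs one diagnostic command and prints its combined output under
// a ">>> " header; a failing command is reported inline rather than stopping the
// sweep, which is why a never-started profile still yields a full, all-error dump.
func collectProbe(header, name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf(">>> %s:\n%s", header, out)
	if err != nil {
		fmt.Printf("(probe exited with error: %v)\n", err)
	}
	fmt.Println()
}

func main() {
	profile := "kubenet-980148" // hypothetical: the profile being inspected
	collectProbe("k8s: kube-proxy logs",
		"kubectl", "--context", profile, "-n", "kube-system", "logs", "-l", "k8s-app=kube-proxy")
	collectProbe("host: crio daemon status",
		"minikube", "-p", profile, "ssh", "sudo systemctl status crio")
}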

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-980148 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-980148

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-980148

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-980148

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-980148

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-980148

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-980148

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-980148

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-980148

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-980148

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-980148

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-980148

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-980148" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-980148

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-980148

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-980148

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-980148

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-980148" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-980148" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21683-136449/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 09 Oct 2025 19:38:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.159:8443
  name: NoKubernetes-195549
contexts:
- context:
    cluster: NoKubernetes-195549
    extensions:
    - extension:
        last-update: Thu, 09 Oct 2025 19:38:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-195549
  name: NoKubernetes-195549
current-context: NoKubernetes-195549
kind: Config
users:
- name: NoKubernetes-195549
  user:
    client-certificate: /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/NoKubernetes-195549/client.crt
    client-key: /home/jenkins/minikube-integration/21683-136449/.minikube/profiles/NoKubernetes-195549/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-980148

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-980148" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-980148"

                                                
                                                
----------------------- debugLogs end: cilium-980148 [took: 5.596307648s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-980148" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-980148
--- SKIP: TestNetworkPlugins/group/cilium (5.77s)
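Two error flavors alternate throughout the cilium sweep above: kubectl prints "context was not found for specified context" because cilium-980148 is absent from the kubeconfig (which, per the dump above, only holds a leftover NoKubernetes-195549 entry), while minikube prints "Profile not found" because no profile directory was ever created. Below is a small sketch, assuming client-go's clientcmd package and a hypothetical contextExists helper, of how the kubectl-side condition can be checked programmatically.

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

// contextExists reports whether the named context is present in the kubeconfig
// at path; its absence is what kubectl surfaces as "context was not found".
func contextExists(path, name string) (bool, error) {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return false, err
	}
	_, ok := cfg.Contexts[name]
	return ok, nil
}

func main() {
	path := clientcmd.RecommendedHomeFile // normally ~/.kube/config
	ok, err := contextExists(path, "cilium-980148")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("context present:", ok) // false for a profile that was never started
}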

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-880169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-880169
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
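Unlike the network-plugin skips, this one is gated on the VM driver rather than on the test being outdated. A minimal sketch of such a guard follows, with an assumed helper name and driver value (the actual check in start_stop_delete_test.go is not reproduced here).

package integration

import "testing"

// skipUnlessVirtualBox mirrors the kind of guard that produces the SKIP above:
// driver-mount behaviour only applies to VirtualBox host folders, so any other
// driver (kvm2 in this job) short-circuits with t.Skipf.
func skipUnlessVirtualBox(t *testing.T, driver string) {
	t.Helper()
	if driver != "virtualbox" {
		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
	}
}

func TestDisableDriverMountsGuard(t *testing.T) {
	skipUnlessVirtualBox(t, "kvm2") // hypothetical driver value taken from the job configuration
	// ...the body exercising --disable-driver-mounts would follow here...
}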

                                                
                                    